diff --git a/docs/_index.md b/docs/_index.md index 6d01008..7ef6441 100644 --- a/docs/_index.md +++ b/docs/_index.md @@ -5,4 +5,5 @@ title = 'GitOps' This is the documentation for using GitOps to manage Strimzi-based event-driven infrastructure. - [Introduction to GitOps](introduction/_index.md) — Why GitOps over ClickOps, core concepts, and common technologies. +- [Kafka Examples](kafka-examples/_index.md) — Bootstrap, infrastructure, and scenario deployment guide. diff --git a/docs/kafka-examples/_index.md b/docs/kafka-examples/_index.md new file mode 100644 index 0000000..1151528 --- /dev/null +++ b/docs/kafka-examples/_index.md @@ -0,0 +1,177 @@ ++++ +title = 'Deploying Kafka with ArgoCD' ++++ + +The [introduction](../introduction/_index.md) covered why GitOps matters. +This document shows what it looks like in practice — deploying Apache Kafka infrastructure entirely through ArgoCD, from operators to running clusters. + +## How deployment works + +Everything in this repository is an ArgoCD Application. +An Application tells ArgoCD: "watch this directory in Git, and keep the cluster in sync with what's there." + +When you apply an Application, ArgoCD reads the Kustomize manifests from the specified directory, renders them, and creates the Kubernetes resources. +If someone changes a file in the repository, ArgoCD detects the change and updates the cluster. +If someone changes something directly on the cluster, ArgoCD detects the drift and corrects it. + +This means the Git repository is always the source of truth. +There is no need to run `kubectl apply` on individual manifests, no scripts to remember, no runbooks to follow. + +## The deployment flow + +The only manual step is installing ArgoCD itself. +After that, every operator and every Kafka scenario is deployed by applying a single YAML file — the ArgoCD Application. + +1. **Install ArgoCD** (manual, one-time) — `kubectl apply -k` +2. 
**Deploy operators** — one `kubectl apply -f application.yaml` per operator (Strimzi, Keycloak, ESO) +3. **Deploy scenarios** — one `kubectl apply -f application.yaml` per scenario (basic-kafka, kafka-mirror, kafka-oauth) + +Or skip steps 2 and 3 entirely and deploy everything at once with the app-of-apps ApplicationSet. + +### Step 1: ArgoCD bootstrap + +ArgoCD is the one component that can't deploy itself. +The `examples/argo-cd/` directory contains Kustomize overlays for OpenShift and Kubernetes. + +On OpenShift, the install is two steps — the operator and then the instance: + +```bash +kubectl apply -k examples/argo-cd/overlays/openshift/operator +kubectl apply -k examples/argo-cd/overlays/openshift/instance +``` + +The OpenShift overlay includes two important configurations: +- `ARGOCD_CLUSTER_CONFIG_NAMESPACES=argocd` on the operator subscription, which grants the ArgoCD instance cluster-scope permissions. +- A `streamshub` AppProject with `clusterResourceWhitelist`, which allows ArgoCD to manage cluster-scoped resources like Namespaces. + +On Kubernetes: + +```bash +kubectl apply -k examples/argo-cd/overlays/kubernetes +``` + +### Step 2: Deploy operators + +Each operator has an ArgoCD Application file. +Applying it tells ArgoCD to install and manage the operator. + +For example, deploying the Strimzi operator on OpenShift: + +```bash +kubectl apply -f examples/operators/strimzi/overlays/streams-for-kafka/argocd/application.yaml +``` + +That single command creates the namespace, the OLM Subscription, the OperatorGroup, and starts the operator. +ArgoCD keeps it in sync — if someone accidentally deletes the Subscription, ArgoCD recreates it. 
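An operator Application itself is short. As a sketch — abridged from the repository's pattern, so field values such as `targetRevision` may differ from the actual file:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: streams-for-kafka-operator
  namespace: argocd
spec:
  project: streamshub
  source:
    repoURL: https://github.com/streamshub/streamshub-gitops.git
    targetRevision: HEAD
    # the directory ArgoCD watches and keeps in sync
    path: examples/operators/strimzi/overlays/streams-for-kafka
  destination:
    server: https://kubernetes.default.svc
    namespace: streams-kafka-operator
  syncPolicy:
    automated:
      prune: false
      selfHeal: true   # revert manual changes made directly on the cluster
    syncOptions:
      - CreateNamespace=true
```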
+ +The same pattern works for every operator: + +```bash +# Keycloak / RHBK (needed for the OAuth scenario) +kubectl apply -f examples/operators/keycloak/overlays/rhbk/argocd/application.yaml + +# External Secrets Operator (needed for the mirror scenario) +kubectl apply -f examples/operators/external-secrets/overlays/openshift/argocd/application.yaml +``` + +### Step 3: Deploy scenarios + +Scenarios work exactly the same way — each is an ArgoCD Application: + +```bash +kubectl apply -f examples/scenarios/basic-kafka/argocd/application.yaml +``` + +ArgoCD creates the namespace, deploys the Kafka cluster, node pools, metrics configuration, and all supporting resources. +The Strimzi operator then takes over and creates the actual broker and controller pods. + +### Deploying everything at once + +Instead of applying Applications one by one, the app-of-apps pattern deploys all operators and scenarios with a single command. +The ApplicationSet uses a combination of a list generator (for operators) and a Git directory generator (for scenarios): + +```bash +# OpenShift with Streams for Apache Kafka + RHBK +kubectl apply -f examples/app-of-apps/overlays/openshift/applicationset.yaml + +# Kubernetes with community Strimzi + Keycloak +kubectl apply -f examples/app-of-apps/overlays/kubernetes/applicationset.yaml +``` + +Adding a new scenario directory under `examples/scenarios/` automatically creates a new ArgoCD Application on the next sync. + +## What the scenarios deploy + +### Basic Kafka + +A single Kafka cluster in KRaft mode with three controllers and three brokers. +Includes Cruise Control for automatic rebalancing, Entity Operator for managing topics and users as custom resources, and JMX Prometheus metrics. + +Two listeners: plain-text on port 9092 and TLS with client certificate authentication on port 9093. + +### Kafka Mirror + +Two independent Kafka clusters with MirrorMaker 2 replicating all topics from source to target. 
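The replication itself is declared in a `KafkaMirrorMaker2` resource. A simplified sketch — the cluster aliases, bootstrap addresses, and secret name here are illustrative, not the scenario's actual values:

```yaml
apiVersion: kafka.strimzi.io/v1
kind: KafkaMirrorMaker2
metadata:
  name: mirror
  namespace: kafka-target
spec:
  replicas: 1
  connectCluster: target          # the Connect workers run against the target cluster
  clusters:
    - alias: source
      bootstrapServers: source-kafka-bootstrap.kafka-source.svc:9093
      tls:
        trustedCertificates:
          - secretName: source-cluster-ca   # synced from kafka-source by ESO
            certificate: ca.crt
    - alias: target
      bootstrapServers: target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: source
      targetCluster: target
      topicsPattern: ".*"         # replicate all topics
```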
+ +This scenario demonstrates a practical Kubernetes challenge: MirrorMaker 2 runs in the `kafka-target` namespace but needs TLS credentials from `kafka-source`. +The External Secrets Operator solves this by automatically syncing secrets between namespaces. +A `SecretStore` reads from `kafka-source`, and `ExternalSecret` resources pull the CA certificate and user credentials into `kafka-target`. +If the source cluster rotates its CA certificate, the change propagates automatically. + +### Kafka OAuth + +A complete OAuth 2.0 setup: Keycloak as the OIDC provider with a Kafka-specific realm, and a Kafka cluster with token-based authentication on the TLS listener. + +Strimzi v1 replaced the built-in `type: oauth` listener with `type: custom`. +The OAuth libraries remain bundled in the Kafka images — only the API changed. +The JAAS configuration tells Kafka to validate JWT tokens against Keycloak's JWKS endpoint. + +The Keycloak operator is installed into the `keycloak` namespace, and the Keycloak instance is deployed in the same namespace. +This is important because the Keycloak operator watches only its own namespace by default. + +The realm includes test users and OAuth clients. +You can test the setup using [strimzi/test-clients](https://github.com/strimzi/test-clients) Jobs that authenticate via client credentials flow. + +## Making changes + +The power of this setup is what happens after the initial deployment. + +### Scaling brokers + +To add a broker, change `replicas: 3` to `replicas: 4` in `brokers.yaml`, commit, and push. +ArgoCD detects the change, updates the KafkaNodePool, and Strimzi creates the new broker pod. +Cruise Control automatically rebalances partitions to include the new broker. + +### Updating configuration + +To change a Kafka setting like retention time, edit `kafka.yaml`, commit, and push. +ArgoCD syncs the change, and the Strimzi operator performs a rolling restart of the brokers. 
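For example, a retention change is a one-field edit in the `Kafka` resource (the cluster name and values here are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    config:
      # lower retention from the 7-day default to 3 days;
      # committing this change triggers an ArgoCD sync and a
      # Strimzi-managed rolling restart of the brokers
      log.retention.hours: 72
```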
+ +### Adding a new scenario + +Create a new directory under `examples/scenarios/` with a `kustomization.yaml` and the Kafka resources. +If you're using the app-of-apps ApplicationSet, ArgoCD automatically discovers it and creates a new Application. + +## Community and product support + +All scenarios use the `kafka.strimzi.io/v1` CRD API, which is identical for both the community project and the commercial product. +The only difference is which operator you install. + +| Component | Community | Product | +|-----------|-----------|---------| +| Kafka operator | Strimzi | Streams for Apache Kafka (`amq-streams`) | +| Identity provider | Keycloak | Red Hat Build of Keycloak (`rhbk-operator`) | +| Secret management | ESO (Helm chart) | ESO for Red Hat OpenShift (`openshift-external-secrets-operator`) | + +Switching between community and product is a matter of deploying a different operator Application. +The scenarios themselves don't change. + +## What's next + +These scenarios cover the foundational Kafka patterns. +Future additions could include: + +- **KafkaConnect with Debezium** for change data capture +- **StreamsHub Console** for Kafka cluster management UI +- **Kroxylicious** for Kafka proxy with encryption and schema enforcement +- **Monitoring** with Prometheus and Grafana dashboards diff --git a/examples/app-of-apps/README.md b/examples/app-of-apps/README.md new file mode 100644 index 0000000..6d5c282 --- /dev/null +++ b/examples/app-of-apps/README.md @@ -0,0 +1,58 @@ +# App-of-Apps Pattern + +An ArgoCD ApplicationSet that deploys all infrastructure operators and scenarios from this repository with a single command. 
+ +## Platform-Specific Variants + +Since infrastructure operators differ between platforms (OLM on OpenShift vs upstream manifests on Kubernetes), there are platform-specific ApplicationSets: + +- `overlays/openshift/` — deploys Streams for Apache Kafka, RHBK, ESO (Red Hat), and all scenarios +- `overlays/kubernetes/` — deploys community Strimzi, Keycloak, and all scenarios + +## Quick Start + +### OpenShift + +```bash +kubectl apply -f overlays/openshift/applicationset.yaml +``` + +### Kubernetes + +```bash +kubectl apply -f overlays/kubernetes/applicationset.yaml +``` + +## What Gets Deployed + +### OpenShift + +| Application | Path | +|-------------|------| +| `streams-for-kafka-operator` | `operators/strimzi/overlays/streams-for-kafka` | +| `rhbk-operator` | `operators/keycloak/overlays/rhbk` | +| `external-secrets-operator` | `operators/external-secrets/overlays/openshift` | +| `basic-kafka` | `scenarios/basic-kafka` | +| `kafka-mirror` | `scenarios/kafka-mirror` | +| `kafka-oauth` | `scenarios/kafka-oauth` | + +### Kubernetes + +| Application | Path | +|-------------|------| +| `strimzi-operator` | `operators/strimzi/overlays/kubernetes` | +| `keycloak-operator` | `operators/keycloak/overlays/kubernetes` | +| `basic-kafka` | `scenarios/basic-kafka` | +| `kafka-mirror` | `scenarios/kafka-mirror` | +| `kafka-oauth` | `scenarios/kafka-oauth` | + +## Selective Deployment + +To deploy only specific components, modify the `generators` section: +- Remove entries from the `list` generator to skip specific operators +- Add `exclude` entries to the `git` generator to skip specific scenarios + +## Prerequisites + +- ArgoCD installed (see [ArgoCD installation](../argo-cd/)) +- The `streamshub` AppProject must exist (created as part of the ArgoCD instance setup) diff --git a/examples/app-of-apps/overlays/kubernetes/applicationset.yaml b/examples/app-of-apps/overlays/kubernetes/applicationset.yaml new file mode 100644 index 0000000..2af356d --- /dev/null +++ 
b/examples/app-of-apps/overlays/kubernetes/applicationset.yaml @@ -0,0 +1,48 @@ +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: streamshub-gitops + namespace: argocd +spec: + goTemplate: true + goTemplateOptions: ["missingkey=error"] + generators: + - list: + elements: + - name: strimzi-operator + path: examples/operators/strimzi/overlays/kubernetes + namespace: strimzi-operator + - name: keycloak-operator + path: examples/operators/keycloak/overlays/kubernetes + namespace: keycloak + - git: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + revision: argocd-kafka-examples + directories: + - path: examples/scenarios/* + template: + metadata: + name: '{{ default .path.basename .name }}' + labels: + app.kubernetes.io/part-of: streamshub-gitops + spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: '{{ default .path .path.path }}' + destination: + server: https://kubernetes.default.svc + namespace: '{{ default "" .namespace }}' + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true + - ServerSideApply=true diff --git a/examples/app-of-apps/overlays/openshift/applicationset.yaml b/examples/app-of-apps/overlays/openshift/applicationset.yaml new file mode 100644 index 0000000..ffb5131 --- /dev/null +++ b/examples/app-of-apps/overlays/openshift/applicationset.yaml @@ -0,0 +1,57 @@ +apiVersion: argoproj.io/v1alpha1 +kind: ApplicationSet +metadata: + name: streamshub-gitops + namespace: argocd +spec: + goTemplate: true + goTemplateOptions: ["missingkey=error"] + generators: + - list: + elements: + - name: streams-for-kafka-operator + path: examples/operators/strimzi/overlays/streams-for-kafka + namespace: streams-kafka-operator + - 
name: rhbk-operator + path: examples/operators/keycloak/overlays/rhbk + namespace: keycloak + - name: external-secrets-operator + path: examples/operators/external-secrets/overlays/openshift + namespace: external-secrets-operator + - git: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + revision: argocd-kafka-examples + directories: + - path: examples/scenarios/* + template: + metadata: + name: '{{ default .path.basename .name }}' + labels: + app.kubernetes.io/part-of: streamshub-gitops + spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: '{{ default .path .path.path }}' + destination: + server: https://kubernetes.default.svc + namespace: '{{ default "" .namespace }}' + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true + - SkipDryRunOnMissingResource=true + retry: + limit: 5 + backoff: + duration: 30s + maxDuration: 5m + factor: 2 diff --git a/examples/infrastructure/argo-cd/README.md b/examples/argo-cd/README.md similarity index 100% rename from examples/infrastructure/argo-cd/README.md rename to examples/argo-cd/README.md diff --git a/examples/infrastructure/argo-cd/overlays/kubernetes/kustomization.yaml b/examples/argo-cd/overlays/kubernetes/kustomization.yaml similarity index 100% rename from examples/infrastructure/argo-cd/overlays/kubernetes/kustomization.yaml rename to examples/argo-cd/overlays/kubernetes/kustomization.yaml diff --git a/examples/infrastructure/argo-cd/overlays/kubernetes/namespace.yaml b/examples/argo-cd/overlays/kubernetes/namespace.yaml similarity index 100% rename from examples/infrastructure/argo-cd/overlays/kubernetes/namespace.yaml rename to examples/argo-cd/overlays/kubernetes/namespace.yaml diff --git 
a/examples/argo-cd/overlays/openshift/instance/app-project.yaml b/examples/argo-cd/overlays/openshift/instance/app-project.yaml new file mode 100644 index 0000000..fbf2d12 --- /dev/null +++ b/examples/argo-cd/overlays/openshift/instance/app-project.yaml @@ -0,0 +1,14 @@ +apiVersion: argoproj.io/v1alpha1 +kind: AppProject +metadata: + name: streamshub + namespace: argocd +spec: + clusterResourceWhitelist: + - group: '*' + kind: '*' + destinations: + - namespace: '*' + server: '*' + sourceRepos: + - '*' diff --git a/examples/infrastructure/argo-cd/overlays/openshift/instance/argocd.yaml b/examples/argo-cd/overlays/openshift/instance/argocd.yaml similarity index 100% rename from examples/infrastructure/argo-cd/overlays/openshift/instance/argocd.yaml rename to examples/argo-cd/overlays/openshift/instance/argocd.yaml diff --git a/examples/infrastructure/argo-cd/overlays/openshift/instance/kustomization.yaml b/examples/argo-cd/overlays/openshift/instance/kustomization.yaml similarity index 84% rename from examples/infrastructure/argo-cd/overlays/openshift/instance/kustomization.yaml rename to examples/argo-cd/overlays/openshift/instance/kustomization.yaml index c8dbbe4..390a185 100644 --- a/examples/infrastructure/argo-cd/overlays/openshift/instance/kustomization.yaml +++ b/examples/argo-cd/overlays/openshift/instance/kustomization.yaml @@ -4,3 +4,4 @@ kind: Kustomization resources: - namespace.yaml - argocd.yaml + - app-project.yaml diff --git a/examples/infrastructure/argo-cd/overlays/openshift/instance/namespace.yaml b/examples/argo-cd/overlays/openshift/instance/namespace.yaml similarity index 100% rename from examples/infrastructure/argo-cd/overlays/openshift/instance/namespace.yaml rename to examples/argo-cd/overlays/openshift/instance/namespace.yaml diff --git a/examples/infrastructure/argo-cd/overlays/openshift/operator/kustomization.yaml b/examples/argo-cd/overlays/openshift/operator/kustomization.yaml similarity index 100% rename from 
examples/infrastructure/argo-cd/overlays/openshift/operator/kustomization.yaml rename to examples/argo-cd/overlays/openshift/operator/kustomization.yaml diff --git a/examples/infrastructure/argo-cd/overlays/openshift/operator/namespace.yaml b/examples/argo-cd/overlays/openshift/operator/namespace.yaml similarity index 100% rename from examples/infrastructure/argo-cd/overlays/openshift/operator/namespace.yaml rename to examples/argo-cd/overlays/openshift/operator/namespace.yaml diff --git a/examples/infrastructure/argo-cd/overlays/openshift/operator/operatorgroup.yaml b/examples/argo-cd/overlays/openshift/operator/operatorgroup.yaml similarity index 100% rename from examples/infrastructure/argo-cd/overlays/openshift/operator/operatorgroup.yaml rename to examples/argo-cd/overlays/openshift/operator/operatorgroup.yaml diff --git a/examples/infrastructure/argo-cd/overlays/openshift/operator/subscription.yaml b/examples/argo-cd/overlays/openshift/operator/subscription.yaml similarity index 77% rename from examples/infrastructure/argo-cd/overlays/openshift/operator/subscription.yaml rename to examples/argo-cd/overlays/openshift/operator/subscription.yaml index 5f64c2c..08f1919 100644 --- a/examples/infrastructure/argo-cd/overlays/openshift/operator/subscription.yaml +++ b/examples/argo-cd/overlays/openshift/operator/subscription.yaml @@ -9,3 +9,7 @@ spec: name: openshift-gitops-operator source: redhat-operators sourceNamespace: openshift-marketplace + config: + env: + - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES + value: argocd diff --git a/examples/operators/external-secrets/README.md b/examples/operators/external-secrets/README.md new file mode 100644 index 0000000..04556c6 --- /dev/null +++ b/examples/operators/external-secrets/README.md @@ -0,0 +1,47 @@ +# External Secrets Operator + +The [External Secrets Operator (ESO)](https://external-secrets.io/) synchronizes secrets from external sources into Kubernetes Secrets. 
In this repository, it is used with the **Kubernetes provider** to replicate Strimzi-managed secrets across namespaces (e.g., cluster CA certificates and user credentials for MirrorMaker 2). + +## Structure + +- `overlays/` - Platform-specific configurations + - `kubernetes/` - Community ESO deployed via ArgoCD Application pointing to the upstream Helm chart + - `openshift/` - External Secrets Operator for Red Hat OpenShift via OLM (`redhat-operators` catalog, `stable-v1` channel) + +## Deploy via ArgoCD + +### Kubernetes + +```bash +kubectl apply -f overlays/kubernetes/application.yaml +``` + +ArgoCD installs ESO directly from the upstream Helm chart at `charts.external-secrets.io`. + +### OpenShift + +```bash +kubectl apply -f overlays/openshift/argocd/application.yaml +``` + +## Verify Installation + +```bash +kubectl get pods -n external-secrets-operator + +# Verify CRDs are installed +kubectl get crd | grep external-secrets +``` + +## Differences Between Platforms + +| Feature | Kubernetes | OpenShift | +|---------|------------|-----------| +| Installation | ArgoCD + Helm chart | OLM (redhat-operators) | +| Namespace | `external-secrets-operator` | `external-secrets-operator` | +| Support | Community | Red Hat commercial support | +| Updates | Update `targetRevision` in Application | Automatic via OLM | + +## How It Works + +ESO uses a `SecretStore` with the Kubernetes provider to read secrets from a remote namespace and create local copies via `ExternalSecret` resources. See the [kafka-mirror scenario](../../scenarios/kafka-mirror/) for a working example. 
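In outline, the pair of resources looks like this — the names are hypothetical and the API version may vary between ESO releases:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: kafka-source
  namespace: kafka-target
spec:
  provider:
    kubernetes:
      remoteNamespace: kafka-source      # read secrets from this namespace
      server:
        caProvider:
          type: ConfigMap
          name: kube-root-ca.crt
          key: ca.crt
      auth:
        serviceAccount:
          name: eso-secret-reader        # needs RBAC to read Secrets in kafka-source
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: source-cluster-ca
  namespace: kafka-target
spec:
  refreshInterval: 1h                    # periodic re-sync picks up CA rotation
  secretStoreRef:
    name: kafka-source
    kind: SecretStore
  target:
    name: source-cluster-ca              # local copy created in kafka-target
  data:
    - secretKey: ca.crt
      remoteRef:
        key: source-cluster-ca-cert      # Strimzi's CA secret in kafka-source
        property: ca.crt
```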
diff --git a/examples/operators/external-secrets/overlays/kubernetes/application.yaml b/examples/operators/external-secrets/overlays/kubernetes/application.yaml new file mode 100644 index 0000000..1872589 --- /dev/null +++ b/examples/operators/external-secrets/overlays/kubernetes/application.yaml @@ -0,0 +1,26 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: external-secrets-operator + namespace: argocd +spec: + project: streamshub + source: + repoURL: https://charts.external-secrets.io + chart: external-secrets + targetRevision: 0.17.0 + helm: + releaseName: external-secrets + destination: + server: https://kubernetes.default.svc + namespace: external-secrets-operator + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true + - ServerSideApply=true diff --git a/examples/operators/external-secrets/overlays/openshift/argocd/application.yaml b/examples/operators/external-secrets/overlays/openshift/argocd/application.yaml new file mode 100644 index 0000000..857b582 --- /dev/null +++ b/examples/operators/external-secrets/overlays/openshift/argocd/application.yaml @@ -0,0 +1,31 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: external-secrets-operator + namespace: argocd +spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: examples/operators/external-secrets/overlays/openshift + destination: + server: https://kubernetes.default.svc + namespace: external-secrets-operator + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true + - SkipDryRunOnMissingResource=true + retry: + limit: 5 + backoff: + duration: 30s + maxDuration: 5m + factor: 2 diff 
--git a/examples/operators/external-secrets/overlays/openshift/external-secrets-config.yaml b/examples/operators/external-secrets/overlays/openshift/external-secrets-config.yaml new file mode 100644 index 0000000..b2ddb38 --- /dev/null +++ b/examples/operators/external-secrets/overlays/openshift/external-secrets-config.yaml @@ -0,0 +1,7 @@ +apiVersion: operator.openshift.io/v1alpha1 +kind: ExternalSecretsConfig +metadata: + name: cluster + annotations: + argocd.argoproj.io/sync-wave: "1" +spec: {} diff --git a/examples/operators/external-secrets/overlays/openshift/kustomization.yaml b/examples/operators/external-secrets/overlays/openshift/kustomization.yaml new file mode 100644 index 0000000..8ece59b --- /dev/null +++ b/examples/operators/external-secrets/overlays/openshift/kustomization.yaml @@ -0,0 +1,7 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - operatorgroup.yaml + - subscription.yaml + - external-secrets-config.yaml diff --git a/examples/operators/external-secrets/overlays/openshift/operatorgroup.yaml b/examples/operators/external-secrets/overlays/openshift/operatorgroup.yaml new file mode 100644 index 0000000..fb9cb0c --- /dev/null +++ b/examples/operators/external-secrets/overlays/openshift/operatorgroup.yaml @@ -0,0 +1,9 @@ +apiVersion: operators.coreos.com/v1 +kind: OperatorGroup +metadata: + name: external-secrets-operator + namespace: external-secrets-operator + annotations: + argocd.argoproj.io/sync-wave: "0" +spec: + upgradeStrategy: Default diff --git a/examples/operators/external-secrets/overlays/openshift/subscription.yaml b/examples/operators/external-secrets/overlays/openshift/subscription.yaml new file mode 100644 index 0000000..ba45287 --- /dev/null +++ b/examples/operators/external-secrets/overlays/openshift/subscription.yaml @@ -0,0 +1,13 @@ +apiVersion: operators.coreos.com/v1alpha1 +kind: Subscription +metadata: + name: openshift-external-secrets-operator + namespace: external-secrets-operator + 
annotations: + argocd.argoproj.io/sync-wave: "0" +spec: + channel: stable-v1 + installPlanApproval: Automatic + name: openshift-external-secrets-operator + source: redhat-operators + sourceNamespace: openshift-marketplace diff --git a/examples/operators/keycloak/README.md b/examples/operators/keycloak/README.md new file mode 100644 index 0000000..6705242 --- /dev/null +++ b/examples/operators/keycloak/README.md @@ -0,0 +1,48 @@ +# Keycloak / Red Hat Build of Keycloak (RHBK) Operator Installation + +## Structure + +- `overlays/` - Platform and product-specific configurations + - `kubernetes/` - Community Keycloak operator (v26.0.7) using upstream CRDs and operator manifests + - `rhbk/` - Red Hat Build of Keycloak operator via OLM (stable-v26.0 channel) + +## Deploy via ArgoCD + +### Kubernetes (Community Keycloak) + +```bash +kubectl apply -f overlays/kubernetes/argocd/application.yaml +``` + +### OpenShift (RHBK) + +```bash +kubectl apply -f overlays/rhbk/argocd/application.yaml +``` + +## Verify Installation + +```bash +# Both overlays deploy the operator into the keycloak namespace +kubectl get pods -n keycloak + +# Verify CRDs are installed +kubectl get crd | grep keycloak +``` + +## Differences Between Platforms + +| Feature | Kubernetes | OpenShift (RHBK) | +|---------|------------|------------------| +| Installation | Upstream manifests | OLM (redhat-operators) | +| Namespace | `keycloak` | `keycloak` | +| Updates | Manual manifest updates | Automatic via OLM | +| Support | Community | Red Hat commercial support | +| CRD API | `k8s.keycloak.org/v2alpha1` | `k8s.keycloak.org/v2alpha1` | + +## Next Steps + +Once the operator is running, deploy a Keycloak instance as part of the [kafka-oauth scenario](../../scenarios/kafka-oauth/).
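A minimal `Keycloak` custom resource the operator reconciles might look like this — the instance count, hostname, and secret names are illustrative, not values from this repository:

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: keycloak
  namespace: keycloak       # the operator watches only its own namespace by default
spec:
  instances: 1
  hostname:
    hostname: keycloak.apps.example.com
  http:
    tlsSecret: keycloak-tls         # Secret holding the serving certificate
  db:
    vendor: postgres
    host: postgres-db
    usernameSecret:
      name: keycloak-db-secret
      key: username
    passwordSecret:
      name: keycloak-db-secret
      key: password
```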
diff --git a/examples/operators/keycloak/overlays/kubernetes/argocd/application.yaml b/examples/operators/keycloak/overlays/kubernetes/argocd/application.yaml new file mode 100644 index 0000000..74ff1ac --- /dev/null +++ b/examples/operators/keycloak/overlays/kubernetes/argocd/application.yaml @@ -0,0 +1,25 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: keycloak-operator + namespace: argocd +spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: examples/operators/keycloak/overlays/kubernetes + destination: + server: https://kubernetes.default.svc + namespace: keycloak + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true + - ServerSideApply=true diff --git a/examples/operators/keycloak/overlays/kubernetes/kustomization.yaml b/examples/operators/keycloak/overlays/kubernetes/kustomization.yaml new file mode 100644 index 0000000..569c8d0 --- /dev/null +++ b/examples/operators/keycloak/overlays/kubernetes/kustomization.yaml @@ -0,0 +1,15 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/refs/tags/26.0.7/kubernetes/keycloaks.k8s.keycloak.org-v1.yml + - https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/refs/tags/26.0.7/kubernetes/keycloakrealmimports.k8s.keycloak.org-v1.yml + - https://raw.githubusercontent.com/keycloak/keycloak-k8s-resources/refs/tags/26.0.7/kubernetes/kubernetes.yml + +namespace: keycloak + +labels: + - pairs: + app.kubernetes.io/name: keycloak-operator + app.kubernetes.io/part-of: keycloak + app.kubernetes.io/managed-by: kustomize diff --git a/examples/operators/keycloak/overlays/rhbk/argocd/application.yaml 
b/examples/operators/keycloak/overlays/rhbk/argocd/application.yaml new file mode 100644 index 0000000..6466668 --- /dev/null +++ b/examples/operators/keycloak/overlays/rhbk/argocd/application.yaml @@ -0,0 +1,24 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: rhbk-operator + namespace: argocd +spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: examples/operators/keycloak/overlays/rhbk + destination: + server: https://kubernetes.default.svc + namespace: keycloak + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true diff --git a/examples/operators/keycloak/overlays/rhbk/kustomization.yaml b/examples/operators/keycloak/overlays/rhbk/kustomization.yaml new file mode 100644 index 0000000..b37ee5d --- /dev/null +++ b/examples/operators/keycloak/overlays/rhbk/kustomization.yaml @@ -0,0 +1,6 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - operatorgroup.yaml + - subscription.yaml diff --git a/examples/operators/keycloak/overlays/rhbk/operatorgroup.yaml b/examples/operators/keycloak/overlays/rhbk/operatorgroup.yaml new file mode 100644 index 0000000..6641507 --- /dev/null +++ b/examples/operators/keycloak/overlays/rhbk/operatorgroup.yaml @@ -0,0 +1,9 @@ +apiVersion: operators.coreos.com/v1 +kind: OperatorGroup +metadata: + name: rhbk-operator + namespace: keycloak +spec: + targetNamespaces: + - keycloak + upgradeStrategy: Default diff --git a/examples/operators/keycloak/overlays/rhbk/subscription.yaml b/examples/operators/keycloak/overlays/rhbk/subscription.yaml new file mode 100644 index 0000000..14c5dbc --- /dev/null +++ b/examples/operators/keycloak/overlays/rhbk/subscription.yaml @@ -0,0 +1,11 @@ +apiVersion: operators.coreos.com/v1alpha1 +kind: 
Subscription +metadata: + name: rhbk-operator + namespace: keycloak +spec: + channel: stable-v26.0 + installPlanApproval: Automatic + name: rhbk-operator + source: redhat-operators + sourceNamespace: openshift-marketplace diff --git a/examples/operators/strimzi/README.md b/examples/operators/strimzi/README.md new file mode 100644 index 0000000..be3a92b --- /dev/null +++ b/examples/operators/strimzi/README.md @@ -0,0 +1,61 @@ +# Strimzi / Streams for Apache Kafka Operator Installation + +## Structure + +- `overlays/` - Platform and product-specific configurations + - `kubernetes/` - Community Strimzi operator using upstream manifests from GitHub releases + - `openshift/` - Community Strimzi operator via OLM (community-operators catalog) + - `streams-for-kafka/` - Streams for Apache Kafka 3.2 operator via OLM (redhat-operators catalog) + +## Deploy via ArgoCD + +### Kubernetes (Community Strimzi) + +```bash +kubectl apply -f overlays/kubernetes/argocd/application.yaml +``` + +### OpenShift (Community Strimzi) + +```bash +kubectl apply -f overlays/openshift/argocd/application.yaml +``` + +### OpenShift (Streams for Apache Kafka 3.2) + +```bash +kubectl apply -f overlays/streams-for-kafka/argocd/application.yaml +``` + +## Verify Installation + +```bash +# Check the operator pod is running +kubectl get pods -n strimzi-operator + +# For Streams for Apache Kafka +kubectl get pods -n streams-kafka-operator + +# Verify CRDs are installed +kubectl get crd | grep strimzi +``` + +## Differences Between Platforms + +| Feature | Kubernetes | OpenShift (Strimzi) | OpenShift (SFAK) | +|---------|------------|---------------------|------------------| +| Installation | Upstream manifests | OLM (community-operators) | OLM (redhat-operators) | +| Namespace | `strimzi-operator` | `strimzi-operator` | `streams-kafka-operator` | +| Updates | Manual manifest updates | Automatic via OLM | Automatic via OLM | +| Support | Community | Community | IBM/Red Hat commercial support | +| CRD API 
| `kafka.strimzi.io/v1` | `kafka.strimzi.io/v1` | `kafka.strimzi.io/v1` | + +## Customization + +- **Watch namespaces**: By default, the operator watches all namespaces. To restrict this, modify the operator deployment environment variables. +- **Resource limits**: Adjust operator resource requests/limits in the deployment manifest (Kubernetes) or via the Subscription config (OLM). +- **Update channel**: Change the `channel` field in the Subscription to pin to a specific version stream. + +## Next Steps + +Once the operator is running, deploy a Kafka cluster using one of the [scenarios](../../scenarios/). diff --git a/examples/operators/strimzi/overlays/kubernetes/argocd/application.yaml b/examples/operators/strimzi/overlays/kubernetes/argocd/application.yaml new file mode 100644 index 0000000..6871f5d --- /dev/null +++ b/examples/operators/strimzi/overlays/kubernetes/argocd/application.yaml @@ -0,0 +1,25 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: strimzi-operator + namespace: argocd +spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: examples/operators/strimzi/overlays/kubernetes + destination: + server: https://kubernetes.default.svc + namespace: strimzi-operator + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true + - ServerSideApply=true diff --git a/examples/operators/strimzi/overlays/kubernetes/kustomization.yaml b/examples/operators/strimzi/overlays/kubernetes/kustomization.yaml new file mode 100644 index 0000000..d2d35cc --- /dev/null +++ b/examples/operators/strimzi/overlays/kubernetes/kustomization.yaml @@ -0,0 +1,13 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - 
https://github.com/strimzi/strimzi-kafka-operator/releases/download/1.0.0/strimzi-cluster-operator-1.0.0.yaml + +namespace: strimzi-operator + +labels: + - pairs: + app.kubernetes.io/name: strimzi-operator + app.kubernetes.io/part-of: strimzi + app.kubernetes.io/managed-by: kustomize diff --git a/examples/operators/strimzi/overlays/openshift/argocd/application.yaml b/examples/operators/strimzi/overlays/openshift/argocd/application.yaml new file mode 100644 index 0000000..82817f1 --- /dev/null +++ b/examples/operators/strimzi/overlays/openshift/argocd/application.yaml @@ -0,0 +1,24 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: strimzi-operator + namespace: argocd +spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: examples/operators/strimzi/overlays/openshift + destination: + server: https://kubernetes.default.svc + namespace: strimzi-operator + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true diff --git a/examples/operators/strimzi/overlays/openshift/kustomization.yaml b/examples/operators/strimzi/overlays/openshift/kustomization.yaml new file mode 100644 index 0000000..b37ee5d --- /dev/null +++ b/examples/operators/strimzi/overlays/openshift/kustomization.yaml @@ -0,0 +1,6 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - operatorgroup.yaml + - subscription.yaml diff --git a/examples/operators/strimzi/overlays/openshift/operatorgroup.yaml b/examples/operators/strimzi/overlays/openshift/operatorgroup.yaml new file mode 100644 index 0000000..ccb4c02 --- /dev/null +++ b/examples/operators/strimzi/overlays/openshift/operatorgroup.yaml @@ -0,0 +1,7 @@ +apiVersion: operators.coreos.com/v1 +kind: OperatorGroup +metadata: + name: 
strimzi-kafka-operator + namespace: strimzi-operator +spec: + upgradeStrategy: Default diff --git a/examples/operators/strimzi/overlays/openshift/subscription.yaml b/examples/operators/strimzi/overlays/openshift/subscription.yaml new file mode 100644 index 0000000..0334ab1 --- /dev/null +++ b/examples/operators/strimzi/overlays/openshift/subscription.yaml @@ -0,0 +1,11 @@ +apiVersion: operators.coreos.com/v1alpha1 +kind: Subscription +metadata: + name: strimzi-kafka-operator + namespace: strimzi-operator +spec: + channel: strimzi-1.0.x + installPlanApproval: Automatic + name: strimzi-kafka-operator + source: community-operators + sourceNamespace: openshift-marketplace diff --git a/examples/operators/strimzi/overlays/streams-for-kafka/argocd/application.yaml b/examples/operators/strimzi/overlays/streams-for-kafka/argocd/application.yaml new file mode 100644 index 0000000..e97b0d4 --- /dev/null +++ b/examples/operators/strimzi/overlays/streams-for-kafka/argocd/application.yaml @@ -0,0 +1,24 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: streams-for-kafka-operator + namespace: argocd +spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: examples/operators/strimzi/overlays/streams-for-kafka + destination: + server: https://kubernetes.default.svc + namespace: streams-kafka-operator + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true diff --git a/examples/operators/strimzi/overlays/streams-for-kafka/kustomization.yaml b/examples/operators/strimzi/overlays/streams-for-kafka/kustomization.yaml new file mode 100644 index 0000000..b37ee5d --- /dev/null +++ b/examples/operators/strimzi/overlays/streams-for-kafka/kustomization.yaml @@ -0,0 +1,6 @@ +apiVersion: 
kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - operatorgroup.yaml + - subscription.yaml diff --git a/examples/operators/strimzi/overlays/streams-for-kafka/operatorgroup.yaml b/examples/operators/strimzi/overlays/streams-for-kafka/operatorgroup.yaml new file mode 100644 index 0000000..16742ca --- /dev/null +++ b/examples/operators/strimzi/overlays/streams-for-kafka/operatorgroup.yaml @@ -0,0 +1,7 @@ +apiVersion: operators.coreos.com/v1 +kind: OperatorGroup +metadata: + name: streams-for-kafka-operator + namespace: streams-kafka-operator +spec: + upgradeStrategy: Default diff --git a/examples/operators/strimzi/overlays/streams-for-kafka/subscription.yaml b/examples/operators/strimzi/overlays/streams-for-kafka/subscription.yaml new file mode 100644 index 0000000..2a33476 --- /dev/null +++ b/examples/operators/strimzi/overlays/streams-for-kafka/subscription.yaml @@ -0,0 +1,11 @@ +apiVersion: operators.coreos.com/v1alpha1 +kind: Subscription +metadata: + name: streams-for-kafka-operator + namespace: streams-kafka-operator +spec: + channel: amq-streams-3.2.x + installPlanApproval: Automatic + name: amq-streams + source: redhat-operators + sourceNamespace: openshift-marketplace diff --git a/examples/scenarios/basic-kafka/README.md b/examples/scenarios/basic-kafka/README.md new file mode 100644 index 0000000..2c62226 --- /dev/null +++ b/examples/scenarios/basic-kafka/README.md @@ -0,0 +1,74 @@ +# Basic Kafka Cluster + +A production-like Apache Kafka cluster running in KRaft mode with 3 controllers and 3 brokers. 
+
+## What Gets Deployed
+
+- **Namespace**: `kafka`
+- **Kafka cluster** (`my-cluster`): KRaft mode, Kafka 4.2.0
+- **KafkaNodePool** `controllers`: 3 replicas, 10Gi storage
+- **KafkaNodePool** `brokers`: 3 replicas, 10Gi storage
+- **Entity Operator**: Topic Operator + User Operator
+- **Cruise Control**: Auto-rebalancing on broker add/remove
+- **Kafka Exporter**: Consumer lag metrics
+- **JMX Prometheus Exporter**: Broker and Cruise Control metrics via ConfigMap
+
+## Listeners
+
+| Name | Port | Type | TLS | Authentication |
+|------|------|------|-----|----------------|
+| `plain` | 9092 | internal | No | None |
+| `tls` | 9093 | internal | Yes | TLS client certificates |
+
+## Prerequisites
+
+- ArgoCD installed (see [ArgoCD installation](../../argo-cd/))
+- Strimzi or Streams for Apache Kafka operator deployed via ArgoCD (see [operator installation](../../operators/strimzi/))
+
+## Deploy via ArgoCD
+
+```bash
+kubectl apply -f argocd/application.yaml
+```
+
+Or deploy all scenarios at once using the [app-of-apps](../../app-of-apps/) ApplicationSet.
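+
+The same flow covers day-two resources. As a sketch (the topic name and settings here are illustrative, not part of this scenario's manifests), a `KafkaTopic` committed alongside the other manifests and listed in `kustomization.yaml` would be created and kept in sync by the Topic Operator:
+
+```yaml
+# Hypothetical example -- not included in this scenario's manifests.
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaTopic
+metadata:
+  name: my-topic
+  namespace: kafka
+  labels:
+    strimzi.io/cluster: my-cluster  # binds the topic to the my-cluster Kafka
+spec:
+  partitions: 6
+  replicas: 3
+  config:
+    retention.ms: 604800000  # 7 days, matching the cluster default
+```
+
+Like every other resource here, editing the file in Git is the whole workflow; ArgoCD applies the change and the Topic Operator reconciles it.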
+ +## Verify + +```bash +# Check ArgoCD sync status +kubectl get application basic-kafka -n argocd + +# Check all pods are running +kubectl get pods -n kafka + +# Check Kafka cluster status +kubectl get kafka my-cluster -n kafka + +# Check node pools +kubectl get kafkanodepool -n kafka +``` + +## Connect to the Cluster + +```bash +# Internal plain-text bootstrap (from within the cluster) +my-cluster-kafka-bootstrap.kafka.svc:9092 + +# Internal TLS bootstrap (from within the cluster) +my-cluster-kafka-bootstrap.kafka.svc:9093 +``` + +## Customization + +- **Replicas**: Adjust `spec.replicas` in `controllers.yaml` and `brokers.yaml` +- **Storage**: Change `size` in the storage configuration of each node pool +- **Resources**: Modify CPU/memory requests and limits per node pool +- **Listeners**: Add additional listeners (e.g., `type: route` for OpenShift external access, `type: nodeport` or `type: loadbalancer` for Kubernetes) +- **Retention**: Adjust `log.retention.hours` in `kafka.yaml` (default: 7 days) + +## Compatibility + +These manifests use `kafka.strimzi.io/v1` CRDs and work with both: +- **Strimzi** 1.0.0+ (community) +- **Streams for Apache Kafka** 3.2+ (IBM/Red Hat) diff --git a/examples/scenarios/basic-kafka/argocd/application.yaml b/examples/scenarios/basic-kafka/argocd/application.yaml new file mode 100644 index 0000000..0236fd6 --- /dev/null +++ b/examples/scenarios/basic-kafka/argocd/application.yaml @@ -0,0 +1,24 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: basic-kafka + namespace: argocd +spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: examples/scenarios/basic-kafka + destination: + server: https://kubernetes.default.svc + namespace: kafka + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + 
syncOptions: + - CreateNamespace=true diff --git a/examples/scenarios/basic-kafka/brokers.yaml b/examples/scenarios/basic-kafka/brokers.yaml new file mode 100644 index 0000000..76e5ece --- /dev/null +++ b/examples/scenarios/basic-kafka/brokers.yaml @@ -0,0 +1,34 @@ +apiVersion: kafka.strimzi.io/v1 +kind: KafkaNodePool +metadata: + name: brokers + namespace: kafka + labels: + strimzi.io/cluster: my-cluster +spec: + replicas: 3 + roles: + - broker + storage: + type: jbod + volumes: + - id: 0 + type: persistent-claim + size: 10Gi + deleteClaim: false + resources: + requests: + memory: 1Gi + cpu: 500m + limits: + memory: 2Gi + cpu: "1" + template: + pod: + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: DoNotSchedule + labelSelector: + matchLabels: + strimzi.io/pool-name: brokers diff --git a/examples/scenarios/basic-kafka/controllers.yaml b/examples/scenarios/basic-kafka/controllers.yaml new file mode 100644 index 0000000..f26aca8 --- /dev/null +++ b/examples/scenarios/basic-kafka/controllers.yaml @@ -0,0 +1,34 @@ +apiVersion: kafka.strimzi.io/v1 +kind: KafkaNodePool +metadata: + name: controllers + namespace: kafka + labels: + strimzi.io/cluster: my-cluster +spec: + replicas: 3 + roles: + - controller + storage: + type: jbod + volumes: + - id: 0 + type: persistent-claim + size: 10Gi + deleteClaim: false + resources: + requests: + memory: 512Mi + cpu: 250m + limits: + memory: 1Gi + cpu: 500m + template: + pod: + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: DoNotSchedule + labelSelector: + matchLabels: + strimzi.io/pool-name: controllers diff --git a/examples/scenarios/basic-kafka/kafka.yaml b/examples/scenarios/basic-kafka/kafka.yaml new file mode 100644 index 0000000..8ddcbb5 --- /dev/null +++ b/examples/scenarios/basic-kafka/kafka.yaml @@ -0,0 +1,88 @@ +apiVersion: kafka.strimzi.io/v1 +kind: Kafka +metadata: + name: my-cluster + namespace: kafka +spec: + 
kafka: + version: 4.2.0 + metadataVersion: "4.2-IV0" + listeners: + - name: plain + port: 9092 + type: internal + tls: false + - name: tls + port: 9093 + type: internal + tls: true + authentication: + type: tls + config: + offsets.topic.replication.factor: 3 + transaction.state.log.replication.factor: 3 + transaction.state.log.min.isr: 2 + default.replication.factor: 3 + min.insync.replicas: 2 + log.retention.hours: 168 + log.segment.bytes: 1073741824 + authorization: + type: simple + superUsers: + - CN=admin-user + logging: + type: inline + loggers: + rootLogger.level: INFO + metricsConfig: + type: jmxPrometheusExporter + valueFrom: + configMapKeyRef: + name: kafka-metrics-config + key: kafka-metrics-config.yaml + entityOperator: + topicOperator: + resources: + requests: + cpu: 200m + memory: 256Mi + limits: + cpu: 500m + memory: 512Mi + userOperator: + resources: + requests: + cpu: 200m + memory: 256Mi + limits: + cpu: 500m + memory: 512Mi + cruiseControl: + resources: + requests: + cpu: 200m + memory: 256Mi + limits: + cpu: "1" + memory: 1Gi + autoRebalance: + - mode: add-brokers + - mode: remove-brokers + config: + cruise.control.metrics.topic.num.partitions: 1 + cruise.control.metrics.topic.replication.factor: 3 + cruise.control.metrics.topic.min.insync.replicas: 2 + metricsConfig: + type: jmxPrometheusExporter + valueFrom: + configMapKeyRef: + name: kafka-metrics-config + key: cruise-control-metrics-config.yaml + kafkaExporter: + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 256Mi diff --git a/examples/scenarios/basic-kafka/kustomization.yaml b/examples/scenarios/basic-kafka/kustomization.yaml new file mode 100644 index 0000000..0e43987 --- /dev/null +++ b/examples/scenarios/basic-kafka/kustomization.yaml @@ -0,0 +1,9 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - namespace.yaml + - metrics-config.yaml + - controllers.yaml + - brokers.yaml + - kafka.yaml diff --git 
a/examples/scenarios/basic-kafka/metrics-config.yaml b/examples/scenarios/basic-kafka/metrics-config.yaml new file mode 100644 index 0000000..66f865a
--- /dev/null
+++ b/examples/scenarios/basic-kafka/metrics-config.yaml
@@ -0,0 +1,155 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kafka-metrics-config
+  namespace: kafka
+data:
+  kafka-metrics-config.yaml: |
+    lowercaseOutputName: true
+    rules:
+    - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
+      name: kafka_server_$1_$2
+      type: GAUGE
+      labels:
+        clientId: "$3"
+        topic: "$4"
+        partition: "$5"
+    - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
+      name: kafka_server_$1_$2
+      type: GAUGE
+      labels:
+        clientId: "$3"
+        broker: "$4:$5"
+    - pattern: kafka.server<type=(.+), cipher=(.+), protocol=(.+), listener=(.+), networkProcessor=(.+)><>connections
+      name: kafka_server_$1_connections_tls_info
+      type: GAUGE
+      labels:
+        cipher: "$2"
+        protocol: "$3"
+        listener: "$4"
+        networkProcessor: "$5"
+    - pattern: kafka.server<type=(.+), clientSoftwareName=(.+), clientSoftwareVersion=(.+), listener=(.+), networkProcessor=(.+)><>connections
+      name: kafka_server_$1_connections_software
+      type: GAUGE
+      labels:
+        clientSoftwareName: "$2"
+        clientSoftwareVersion: "$3"
+        listener: "$4"
+        networkProcessor: "$5"
+    - pattern: "kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+-total):"
+      name: kafka_server_$1_$4
+      type: COUNTER
+      labels:
+        listener: "$2"
+        networkProcessor: "$3"
+    - pattern: "kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+):"
+      name: kafka_server_$1_$4
+      type: GAUGE
+      labels:
+        listener: "$2"
+        networkProcessor: "$3"
+    - pattern: kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+-total)
+      name: kafka_server_$1_$4
+      type: COUNTER
+      labels:
+        listener: "$2"
+        networkProcessor: "$3"
+    - pattern: kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+)
+      name: kafka_server_$1_$4
+      type: GAUGE
+      labels:
+        listener: "$2"
+        networkProcessor: "$3"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>MeanRate
+      name: kafka_$1_$2_$3_percent
+      type: GAUGE
+    - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>Value
+      name: kafka_$1_$2_$3_percent
+      type: GAUGE
+    - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*, (.+)=(.+)><>Value
+      name: kafka_$1_$2_$3_percent
+      type: GAUGE
+      labels:
+        "$4": "$5"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
+      name: kafka_$1_$2_$3_total
+      type: COUNTER
+      labels:
+        "$4": "$5"
+        "$6": "$7"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
+      name: kafka_$1_$2_$3_total
+      type: COUNTER
+      labels:
+        "$4": "$5"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
+      name: kafka_$1_$2_$3_total
+      type: COUNTER
+    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
+      name: kafka_$1_$2_$3
+      type: GAUGE
+      labels:
+        "$4": "$5"
+        "$6": "$7"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
+      name: kafka_$1_$2_$3
+      type: GAUGE
+      labels:
+        "$4": "$5"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
+      name: kafka_$1_$2_$3
+      type: GAUGE
+    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Count
+      name: kafka_$1_$2_$3_count
+      type: COUNTER
+      labels:
+        "$4": "$5"
+        "$6": "$7"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*), (.+)=(.+)><>(\d+)thPercentile
+      name: kafka_$1_$2_$3
+      type: GAUGE
+      labels:
+        "$4": "$5"
+        "$6": "$7"
+        quantile: "0.$8"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Count
+      name: kafka_$1_$2_$3_count
+      type: COUNTER
+      labels:
+        "$4": "$5"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*)><>(\d+)thPercentile
+      name: kafka_$1_$2_$3
+      type: GAUGE
+      labels:
+        "$4": "$5"
+        quantile: "0.$6"
+    - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
+      name: kafka_$1_$2_$3_count
+      type: COUNTER
+    - pattern: kafka.(\w+)<type=(.+), name=(.+)><>(\d+)thPercentile
+      name: kafka_$1_$2_$3
+      type: GAUGE
+      labels:
+        quantile: "0.$4"
+    - pattern: "kafka.server<type=raft-metrics><>(.+-total|.+-max):"
+      name: kafka_server_raftmetrics_$1
+      type: COUNTER
+    - pattern: "kafka.server<type=raft-metrics><>(.+):"
+      name: kafka_server_raftmetrics_$1
+      type: GAUGE
+    - pattern: "kafka.server<type=raft-channel-metrics><>(.+-total|.+-max):"
+      name: kafka_server_raftchannelmetrics_$1
+      type: COUNTER
+    - pattern: "kafka.server<type=raft-channel-metrics><>(.+):"
+      name: kafka_server_raftchannelmetrics_$1
+      type: GAUGE
+    - pattern: "kafka.server<type=broker-metadata-metrics><>(.+):"
+      name: kafka_server_brokermetadatametrics_$1
+      type: GAUGE
+
+  cruise-control-metrics-config.yaml: |
+    lowercaseOutputName: true
+    rules:
+    - pattern: "kafka.cruisecontrol<name=(.+)><>(\\w+)"
+      name: "kafka_cruisecontrol_$1_$2"
+      type: "GAUGE"
diff --git a/examples/scenarios/basic-kafka/namespace.yaml b/examples/scenarios/basic-kafka/namespace.yaml new file mode 100644 index 0000000..27f8b69
--- /dev/null
+++ b/examples/scenarios/basic-kafka/namespace.yaml
@@ -0,0 +1,6 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: kafka
+  labels:
+
name: kafka
diff --git a/examples/scenarios/kafka-mirror/README.md b/examples/scenarios/kafka-mirror/README.md new file mode 100644 index 0000000..e9eb1b2
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/README.md
@@ -0,0 +1,95 @@
+# Kafka Mirroring with MirrorMaker 2
+
+Two Kafka clusters with MirrorMaker 2 replicating data from the source to the target cluster.
+
+## What Gets Deployed
+
+- **Source cluster** (`source`) in namespace `kafka-source`: 3 controllers + 3 brokers
+- **Target cluster** (`target`) in namespace `kafka-target`: 3 controllers + 3 brokers
+- **MirrorMaker 2** (`mirror`) in namespace `kafka-target`: 1 replica replicating all topics
+- **KafkaUser** `source-mirror-user` on source cluster (read access) and `mirror-user` on target cluster (read/write access)
+
+## Architecture
+
+```
+┌─────────────────────┐         ┌─────────────────────┐
+│    kafka-source     │         │    kafka-target     │
+│                     │         │                     │
+│  ┌───────────────┐  │         │  ┌───────────────┐  │
+│  │ source cluster│  │────────►│  │ target cluster│  │
+│  │  (3 ctrl +    │  │   MM2   │  │  (3 ctrl +    │  │
+│  │   3 brokers)  │  │         │  │   3 brokers)  │  │
+│  └───────────────┘  │         │  └───────────────┘  │
+│                     │         │                     │
+│ source-mirror-user  │         │  mirror-user (r/w)  │
+│   (read)            │         │                     │
+└─────────────────────┘         │    MirrorMaker 2    │
+                                └─────────────────────┘
+```
+
+## MirrorMaker 2 Connectors
+
+| Connector | Purpose |
+|-----------|---------|
+| Source Connector | Replicates topic data from source to target |
+| Heartbeat Connector | Monitors connectivity between clusters |
+| Checkpoint Connector | Tracks consumer group offset mapping for failover |
+
+## Prerequisites
+
+- ArgoCD installed (see [ArgoCD installation](../../argo-cd/))
+- Strimzi or Streams for Apache Kafka operator deployed via ArgoCD (see [operator installation](../../operators/strimzi/))
+- [External Secrets Operator](../../operators/external-secrets/) deployed via ArgoCD for automatic cross-namespace secret replication
+
+## Cross-Namespace Secret Replication
+
+MirrorMaker 2 runs in
`kafka-target` but needs access to the source cluster's CA certificate and user credentials from `kafka-source`. This scenario uses the [External Secrets Operator](../../operators/external-secrets/) with the Kubernetes provider to automatically replicate these secrets. + +The `secrets/` directory contains: +- **RBAC** — ServiceAccount in `kafka-target` with a Role/RoleBinding granting read access to secrets in `kafka-source` +- **SecretStore** — points to `kafka-source` namespace using the Kubernetes provider +- **ExternalSecrets** — pull `source-cluster-ca-cert` and `source-mirror-user` into `kafka-target` + +Secrets replicated automatically: +- `source-cluster-ca-cert` (source cluster CA certificate for TLS trust) +- `source-mirror-user` (TLS credentials for authenticating to the source cluster) + +## Deploy via ArgoCD + +```bash +kubectl apply -f argocd/application.yaml +``` + +Or deploy all scenarios at once using the [app-of-apps](../../app-of-apps/) ApplicationSet. + +ArgoCD deploys both clusters, ESO resources, and MirrorMaker 2. Once the source cluster is ready, the External Secrets Operator syncs the required secrets into `kafka-target` and MirrorMaker 2 starts replicating topics. + +## Verify + +```bash +# Check MirrorMaker 2 status +kubectl get kafkamirrormaker2 mirror -n kafka-target + +# Check connector status +kubectl get kafkamirrormaker2 mirror -n kafka-target -o jsonpath='{.status.connectors}' | jq . 
+ +# Produce to source, verify on target +kubectl run kafka-producer -ti --image=quay.io/strimzi/kafka:latest-kafka-4.0.0 --rm=true --restart=Never -n kafka-source -- \ + bin/kafka-console-producer.sh --bootstrap-server source-kafka-bootstrap:9092 --topic test-topic + +kubectl run kafka-consumer -ti --image=quay.io/strimzi/kafka:latest-kafka-4.0.0 --rm=true --restart=Never -n kafka-target -- \ + bin/kafka-console-consumer.sh --bootstrap-server target-kafka-bootstrap:9092 --topic source.test-topic --from-beginning +``` + +## Customization + +- **Topic filtering**: Change `topicsPattern` in `mirror-maker/mirror-maker2.yaml` (default: `.*` mirrors all topics) +- **Replicas**: Adjust MirrorMaker 2 replicas for throughput +- **Bidirectional**: Add a second `mirrors` entry to replicate target back to source +- **Cross-cluster**: In production, source and target are typically on separate Kubernetes clusters. Update `bootstrapServers` to use external addresses and configure the ArgoCD Application for multi-cluster deployment. 
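+
+For example, narrowing `topicsPattern` to a few prefixes (the topic names here are illustrative) is a small change in `mirror-maker/mirror-maker2.yaml`; the field takes a regular expression or a comma-separated topic list, and MirrorMaker 2's internal topics are excluded automatically:
+
+```yaml
+# Hypothetical filter -- replaces the default ".*" in the mirrors entry.
+topicsPattern: "orders.*|payments.*"
+groupsPattern: "orders-.*"  # also illustrative; limits which consumer groups are checkpointed
+```
+
+Because the manifest lives in Git, the change rolls out the same way as everything else: commit, let ArgoCD sync, and the operator reconfigures the connectors.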
+ +## Compatibility + +These manifests use `kafka.strimzi.io/v1` CRDs and work with both: +- **Strimzi** 1.0.0+ (community) +- **Streams for Apache Kafka** 3.2+ (IBM/Red Hat) diff --git a/examples/scenarios/kafka-mirror/argocd/application.yaml b/examples/scenarios/kafka-mirror/argocd/application.yaml new file mode 100644 index 0000000..e045754 --- /dev/null +++ b/examples/scenarios/kafka-mirror/argocd/application.yaml @@ -0,0 +1,23 @@ +apiVersion: argoproj.io/v1alpha1 +kind: Application +metadata: + name: kafka-mirror + namespace: argocd +spec: + project: streamshub + source: + repoURL: https://github.com/streamshub/streamshub-gitops.git + # TODO: switch back to HEAD before merge + targetRevision: argocd-kafka-examples + path: examples/scenarios/kafka-mirror + destination: + server: https://kubernetes.default.svc + syncPolicy: + automated: + prune: false + selfHeal: true + managedNamespaceMetadata: + labels: + argocd.argoproj.io/managed-by: argocd + syncOptions: + - CreateNamespace=true diff --git a/examples/scenarios/kafka-mirror/kustomization.yaml b/examples/scenarios/kafka-mirror/kustomization.yaml new file mode 100644 index 0000000..c72019c --- /dev/null +++ b/examples/scenarios/kafka-mirror/kustomization.yaml @@ -0,0 +1,8 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - source-cluster + - target-cluster + - secrets + - mirror-maker diff --git a/examples/scenarios/kafka-mirror/mirror-maker/kustomization.yaml b/examples/scenarios/kafka-mirror/mirror-maker/kustomization.yaml new file mode 100644 index 0000000..9dad563 --- /dev/null +++ b/examples/scenarios/kafka-mirror/mirror-maker/kustomization.yaml @@ -0,0 +1,6 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: + - metrics-config.yaml + - mirror-maker2.yaml diff --git a/examples/scenarios/kafka-mirror/mirror-maker/metrics-config.yaml b/examples/scenarios/kafka-mirror/mirror-maker/metrics-config.yaml new file mode 100644 index 0000000..58581a8 
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/mirror-maker/metrics-config.yaml
@@ -0,0 +1,126 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: mirror-maker-metrics-config
+  namespace: kafka-target
+data:
+  metrics-config.yaml: |
+    lowercaseOutputName: true
+    lowercaseOutputLabelNames: true
+    rules:
+    - pattern: 'kafka.(.+)<type=app-info, client-id=(.+)><>start-time-ms'
+      name: kafka_$1_start_time_seconds
+      labels:
+        clientId: "$2"
+      help: "Kafka $1 JMX metric start time seconds"
+      type: GAUGE
+      valueFactor: 0.001
+    - pattern: 'kafka.(.+)<type=app-info, client-id=(.+)><>(commit-id|version): (.+)'
+      name: kafka_$1_$3_info
+      value: 1
+      labels:
+        clientId: "$2"
+        $3: "$4"
+      help: "Kafka $1 JMX metric info version and commit-id"
+      type: UNTYPED
+    - pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+), topic=(.+), partition=(.+)><>(.+-total)
+      name: kafka_$2_$6
+      labels:
+        clientId: "$3"
+        topic: "$4"
+        partition: "$5"
+      type: COUNTER
+    - pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+), topic=(.+), partition=(.+)><>(compression-rate|.+-avg|.+-replica|.+-lag|.+-lead)
+      name: kafka_$2_$6
+      labels:
+        clientId: "$3"
+        topic: "$4"
+        partition: "$5"
+      type: GAUGE
+    - pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+), topic=(.+)><>(.+-total)
+      name: kafka_$2_$5
+      labels:
+        clientId: "$3"
+        topic: "$4"
+      type: COUNTER
+    - pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+), topic=(.+)><>(compression-rate|.+-avg)
+      name: kafka_$2_$5
+      labels:
+        clientId: "$3"
+        topic: "$4"
+      type: GAUGE
+    - pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+), node-id=(.+)><>(.+-total)
+      name: kafka_$2_$5
+      labels:
+        clientId: "$3"
+        nodeId: "$4"
+      type: COUNTER
+    - pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+), node-id=(.+)><>(.+-avg)
+      name: kafka_$2_$5
+      labels:
+        clientId: "$3"
+        nodeId: "$4"
+      type: GAUGE
+    - pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+)><>(.+-total)
+      name: kafka_$2_$4
+      labels:
+        clientId: "$3"
+      type: COUNTER
+    - pattern: kafka.(.+)<type=(.+)-metrics, client-id=(.+)><>(.+-avg|.+-bytes|.+-count|.+-ratio|.+-age|.+-flight|.+-threads|.+-connectors|.+-tasks|.+-ago)
+      name: kafka_$2_$4
+      labels:
+        clientId: "$3"
+      type: GAUGE
+    - pattern: 'kafka.connect<type=connector-task-metrics, connector=(.+), task=(.+)><>status: ([a-z-]+)'
+      name: kafka_connect_connector_status
+      value: 1
+      labels:
+        connector: "$1"
+        task: "$2"
+        status: "$3"
+      type: GAUGE
+    - pattern: kafka.connect<type=(.+)-metrics, connector=(.+), task=(.+)><>(.+-total)
+      name: kafka_connect_$1_$4
+      labels:
+        connector: "$2"
+        task: "$3"
+      type: COUNTER
+    - pattern: kafka.connect<type=(.+)-metrics, connector=(.+), task=(.+)><>(.+-count|.+-ms|.+-ratio|.+-avg|.+-failures|.+-requests|.+-timestamp|.+-logged|.+-errors|.+-retries|.+-skipped)
+      name: kafka_connect_$1_$4
+      labels:
+        connector: "$2"
+        task: "$3"
+      type: GAUGE
+    - pattern: kafka.connect<type=connect-worker-metrics, connector=(.+)><>([a-z-]+)
+      name: kafka_connect_worker_$2
+      labels:
+        connector: "$1"
+      type: GAUGE
+    - pattern: kafka.connect<type=connect-worker-metrics><>([a-z-]+-total)
+      name: kafka_connect_worker_$1
+      type: COUNTER
+    - pattern: kafka.connect<type=connect-worker-metrics><>([a-z-]+)
+      name: kafka_connect_worker_$1
+      type: GAUGE
+    - pattern: kafka.connect<type=connect-worker-rebalance-metrics><>([a-z-]+-total)
+      name: kafka_connect_worker_rebalance_$1
+      type: COUNTER
+    - pattern: kafka.connect<type=connect-worker-rebalance-metrics><>([a-z-]+)
+      name: kafka_connect_worker_rebalance_$1
+      type: GAUGE
+    - pattern: kafka.connect.mirror<type=MirrorSourceConnector, target=(.+), topic=(.+), partition=(.+)><>([a-z-_]+)
+      name: kafka_connect_mirror_mirrorsourceconnector_$4
+      labels:
+        target: "$1"
+        topic: "$2"
+        partition: "$3"
+      type: GAUGE
+    - pattern: kafka.connect.mirror<type=MirrorCheckpointConnector, source=(.+), target=(.+), group=(.+), topic=(.+), partition=(.+)><>([a-z-_]+)
+      name: kafka_connect_mirror_mirrorcheckpointconnector_$6
+      labels:
+        source: "$1"
+        target: "$2"
+        group: "$3"
+        topic: "$4"
+        partition: "$5"
+      type: GAUGE
diff --git a/examples/scenarios/kafka-mirror/mirror-maker/mirror-maker2.yaml b/examples/scenarios/kafka-mirror/mirror-maker/mirror-maker2.yaml new file mode 100644 index 0000000..5a42b7d
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/mirror-maker/mirror-maker2.yaml
@@ -0,0 +1,74 @@
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaMirrorMaker2
+metadata:
+  name: mirror
+  namespace: kafka-target
+spec:
+  version: 4.2.0
+  replicas: 1
+  resources:
+    requests:
+      memory: 512Mi
+      cpu: 250m
+    limits:
+      memory: 1Gi
+      cpu: 500m
+  target:
+    alias: target
+    bootstrapServers: target-kafka-bootstrap.kafka-target.svc:9093
+    groupId: mirror-group
+    configStorageTopic: mirror-config
+    offsetStorageTopic: mirror-offset
+    statusStorageTopic: mirror-status
+    tls:
+      trustedCertificates:
+        - secretName: target-cluster-ca-cert
+          pattern: "*.crt"
+    authentication:
+      type: tls
+
certificateAndKey: + secretName: mirror-user + certificate: user.crt + key: user.key + config: + config.storage.replication.factor: 3 + offset.storage.replication.factor: 3 + status.storage.replication.factor: 3 + mirrors: + - source: + alias: source + bootstrapServers: source-kafka-bootstrap.kafka-source.svc:9093 + tls: + trustedCertificates: + - secretName: source-cluster-ca-cert + pattern: "*.crt" + authentication: + type: tls + certificateAndKey: + secretName: source-mirror-user + certificate: user.crt + key: user.key + sourceConnector: + config: + replication.factor: 3 + offset-syncs.topic.replication.factor: 3 + sync.topic.acls.enabled: "false" + autoRestart: + enabled: true + checkpointConnector: + config: + checkpoints.topic.replication.factor: 3 + autoRestart: + enabled: true + topicsPattern: ".*" + groupsPattern: ".*" + logging: + type: inline + loggers: + rootLogger.level: INFO + metricsConfig: + type: jmxPrometheusExporter + valueFrom: + configMapKeyRef: + name: mirror-maker-metrics-config + key: metrics-config.yaml diff --git a/examples/scenarios/kafka-mirror/secrets/external-secrets.yaml b/examples/scenarios/kafka-mirror/secrets/external-secrets.yaml new file mode 100644 index 0000000..d1b0710 --- /dev/null +++ b/examples/scenarios/kafka-mirror/secrets/external-secrets.yaml @@ -0,0 +1,33 @@ +apiVersion: external-secrets.io/v1 +kind: ExternalSecret +metadata: + name: source-cluster-ca-cert + namespace: kafka-target +spec: + refreshInterval: 1m + secretStoreRef: + name: kafka-source-store + kind: SecretStore + target: + name: source-cluster-ca-cert + creationPolicy: Owner + dataFrom: + - extract: + key: source-cluster-ca-cert +--- +apiVersion: external-secrets.io/v1 +kind: ExternalSecret +metadata: + name: source-mirror-user + namespace: kafka-target +spec: + refreshInterval: 1m + secretStoreRef: + name: kafka-source-store + kind: SecretStore + target: + name: source-mirror-user + creationPolicy: Owner + dataFrom: + - extract: + key: source-mirror-user 
diff --git a/examples/scenarios/kafka-mirror/secrets/kustomization.yaml b/examples/scenarios/kafka-mirror/secrets/kustomization.yaml
new file mode 100644
index 0000000..baa5248
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/secrets/kustomization.yaml
@@ -0,0 +1,7 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+resources:
+  - rbac.yaml
+  - secret-store.yaml
+  - external-secrets.yaml
diff --git a/examples/scenarios/kafka-mirror/secrets/rbac.yaml b/examples/scenarios/kafka-mirror/secrets/rbac.yaml
new file mode 100644
index 0000000..43be4bc
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/secrets/rbac.yaml
@@ -0,0 +1,29 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: eso-secret-reader
+  namespace: kafka-target
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: eso-secret-reader
+  namespace: kafka-source
+rules:
+  - apiGroups: [""]
+    resources: ["secrets"]
+    verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: eso-secret-reader
+  namespace: kafka-source
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: eso-secret-reader
+subjects:
+  - kind: ServiceAccount
+    name: eso-secret-reader
+    namespace: kafka-target
diff --git a/examples/scenarios/kafka-mirror/secrets/secret-store.yaml b/examples/scenarios/kafka-mirror/secrets/secret-store.yaml
new file mode 100644
index 0000000..b2e5046
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/secrets/secret-store.yaml
@@ -0,0 +1,17 @@
+apiVersion: external-secrets.io/v1
+kind: SecretStore
+metadata:
+  name: kafka-source-store
+  namespace: kafka-target
+spec:
+  provider:
+    kubernetes:
+      remoteNamespace: kafka-source
+      server:
+        caProvider:
+          type: ConfigMap
+          name: kube-root-ca.crt
+          key: ca.crt
+      auth:
+        serviceAccount:
+          name: eso-secret-reader
diff --git a/examples/scenarios/kafka-mirror/source-cluster/brokers.yaml b/examples/scenarios/kafka-mirror/source-cluster/brokers.yaml
new file mode 100644
index 0000000..f7e5d3c
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/source-cluster/brokers.yaml
@@ -0,0 +1,34 @@
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaNodePool
+metadata:
+  name: brokers
+  namespace: kafka-source
+  labels:
+    strimzi.io/cluster: source
+spec:
+  replicas: 3
+  roles:
+    - broker
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 10Gi
+        deleteClaim: false
+  resources:
+    requests:
+      memory: 1Gi
+      cpu: 500m
+    limits:
+      memory: 2Gi
+      cpu: "1"
+  template:
+    pod:
+      topologySpreadConstraints:
+        - maxSkew: 1
+          topologyKey: kubernetes.io/hostname
+          whenUnsatisfiable: DoNotSchedule
+          labelSelector:
+            matchLabels:
+              strimzi.io/pool-name: brokers
diff --git a/examples/scenarios/kafka-mirror/source-cluster/controllers.yaml b/examples/scenarios/kafka-mirror/source-cluster/controllers.yaml
new file mode 100644
index 0000000..0570ddf
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/source-cluster/controllers.yaml
@@ -0,0 +1,34 @@
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaNodePool
+metadata:
+  name: controllers
+  namespace: kafka-source
+  labels:
+    strimzi.io/cluster: source
+spec:
+  replicas: 3
+  roles:
+    - controller
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 10Gi
+        deleteClaim: false
+  resources:
+    requests:
+      memory: 512Mi
+      cpu: 250m
+    limits:
+      memory: 1Gi
+      cpu: 500m
+  template:
+    pod:
+      topologySpreadConstraints:
+        - maxSkew: 1
+          topologyKey: kubernetes.io/hostname
+          whenUnsatisfiable: DoNotSchedule
+          labelSelector:
+            matchLabels:
+              strimzi.io/pool-name: controllers
diff --git a/examples/scenarios/kafka-mirror/source-cluster/kafka.yaml b/examples/scenarios/kafka-mirror/source-cluster/kafka.yaml
new file mode 100644
index 0000000..7e5f28d
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/source-cluster/kafka.yaml
@@ -0,0 +1,120 @@
+apiVersion: kafka.strimzi.io/v1
+kind: Kafka
+metadata:
+  name: source
+  namespace: kafka-source
+spec:
+  kafka:
+    version: 4.2.0
+    metadataVersion: "4.2-IV0"
+    listeners:
+      - name: plain
+        port: 9092
+        type: internal
+        tls: false
+      - name: tls
+        port: 9093
+        type: internal
+        tls: true
+        authentication:
+          type: tls
+    config:
+      offsets.topic.replication.factor: 3
+      transaction.state.log.replication.factor: 3
+      transaction.state.log.min.isr: 2
+      default.replication.factor: 3
+      min.insync.replicas: 2
+      log.retention.hours: 168
+      log.segment.bytes: 1073741824
+    authorization:
+      type: simple
+      superUsers:
+        - CN=admin-user
+        - CN=source-mirror-user
+    logging:
+      type: inline
+      loggers:
+        rootLogger.level: INFO
+    metricsConfig:
+      type: jmxPrometheusExporter
+      valueFrom:
+        configMapKeyRef:
+          name: kafka-metrics-config
+          key: kafka-metrics-config.yaml
+  entityOperator:
+    topicOperator:
+      resources:
+        requests:
+          cpu: 200m
+          memory: 256Mi
+        limits:
+          cpu: 500m
+          memory: 512Mi
+    userOperator:
+      resources:
+        requests:
+          cpu: 200m
+          memory: 256Mi
+        limits:
+          cpu: 500m
+          memory: 512Mi
+  cruiseControl:
+    resources:
+      requests:
+        cpu: 200m
+        memory: 256Mi
+      limits:
+        cpu: "1"
+        memory: 1Gi
+    autoRebalance:
+      - mode: add-brokers
+      - mode: remove-brokers
+    config:
+      cruise.control.metrics.topic.num.partitions: 1
+      cruise.control.metrics.topic.replication.factor: 3
+      cruise.control.metrics.topic.min.insync.replicas: 2
+    metricsConfig:
+      type: jmxPrometheusExporter
+      valueFrom:
+        configMapKeyRef:
+          name: kafka-metrics-config
+          key: cruise-control-metrics-config.yaml
+  kafkaExporter:
+    resources:
+      requests:
+        cpu: 100m
+        memory: 128Mi
+      limits:
+        cpu: 500m
+        memory: 256Mi
+
+---
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaUser
+metadata:
+  name: source-mirror-user
+  namespace: kafka-source
+  labels:
+    strimzi.io/cluster: source
+spec:
+  authentication:
+    type: tls
+  authorization:
+    type: simple
+    acls:
+      - resource:
+          type: topic
+          name: "*"
+          patternType: literal
+        operations:
+          - Describe
+          - Read
+        host: "*"
+      - resource:
+          type: group
+          name: "*"
+          patternType: literal
+        operations:
+          - Describe
+          - Read
+        host: "*"
diff --git a/examples/scenarios/kafka-mirror/source-cluster/kustomization.yaml b/examples/scenarios/kafka-mirror/source-cluster/kustomization.yaml
new file mode 100644
index 0000000..0e43987
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/source-cluster/kustomization.yaml
@@ -0,0 +1,9 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+resources:
+  - namespace.yaml
+  - metrics-config.yaml
+  - controllers.yaml
+  - brokers.yaml
+  - kafka.yaml
diff --git a/examples/scenarios/kafka-mirror/source-cluster/metrics-config.yaml b/examples/scenarios/kafka-mirror/source-cluster/metrics-config.yaml
new file mode 100644
index 0000000..131cc0b
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/source-cluster/metrics-config.yaml
@@ -0,0 +1,81 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kafka-metrics-config
+  namespace: kafka-source
+data:
+  kafka-metrics-config.yaml: |
+    lowercaseOutputName: true
+    rules:
+      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
+        name: kafka_server_$1_$2
+        type: GAUGE
+        labels:
+          clientId: "$3"
+          topic: "$4"
+          partition: "$5"
+      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
+        name: kafka_server_$1_$2
+        type: GAUGE
+        labels:
+          clientId: "$3"
+          broker: "$4:$5"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>MeanRate
+        name: kafka_$1_$2_$3_percent
+        type: GAUGE
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>Value
+        name: kafka_$1_$2_$3_percent
+        type: GAUGE
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
+        name: kafka_$1_$2_$3_total
+        type: COUNTER
+        labels:
+          "$4": "$5"
+          "$6": "$7"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
+        name: kafka_$1_$2_$3_total
+        type: COUNTER
+        labels:
+          "$4": "$5"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
+        name: kafka_$1_$2_$3_total
+        type: COUNTER
+      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
+        name: kafka_$1_$2_$3
+        type: GAUGE
+        labels:
+          "$4": "$5"
+          "$6": "$7"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
+        name: kafka_$1_$2_$3
+        type: GAUGE
+        labels:
+          "$4": "$5"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
+        name: kafka_$1_$2_$3
+        type: GAUGE
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
+        name: kafka_$1_$2_$3_count
+        type: COUNTER
+      - pattern: "kafka.server<type=raft-metrics><>(.+-total|.+-max):"
+        name: kafka_server_raftmetrics_$1
+        type: COUNTER
+      - pattern: "kafka.server<type=raft-metrics><>(.+):"
+        name: kafka_server_raftmetrics_$1
+        type: GAUGE
+      - pattern: "kafka.server<type=raft-channel-metrics><>(.+-total|.+-max):"
+        name: kafka_server_raftchannelmetrics_$1
+        type: COUNTER
+      - pattern: "kafka.server<type=raft-channel-metrics><>(.+):"
+        name: kafka_server_raftchannelmetrics_$1
+        type: GAUGE
+      - pattern: "kafka.server<type=broker-metadata-metrics><>(.+):"
+        name: kafka_server_brokermetadatametrics_$1
+        type: GAUGE
+
+  cruise-control-metrics-config.yaml: |
+    lowercaseOutputName: true
+    rules:
+      - pattern: "kafka.cruisecontrol<name=(.+)><>(\\w+)"
+        name: "kafka_cruisecontrol_$1_$2"
+        type: "GAUGE"
diff --git a/examples/scenarios/kafka-mirror/source-cluster/namespace.yaml b/examples/scenarios/kafka-mirror/source-cluster/namespace.yaml
new file mode 100644
index 0000000..66c73bf
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/source-cluster/namespace.yaml
@@ -0,0 +1,6 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: kafka-source
+  labels:
+    name: kafka-source
diff --git a/examples/scenarios/kafka-mirror/target-cluster/brokers.yaml b/examples/scenarios/kafka-mirror/target-cluster/brokers.yaml
new file mode 100644
index 0000000..de688e7
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/target-cluster/brokers.yaml
@@ -0,0 +1,34 @@
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaNodePool
+metadata:
+  name: brokers
+  namespace: kafka-target
+  labels:
+    strimzi.io/cluster: target
+spec:
+  replicas: 3
+  roles:
+    - broker
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 10Gi
+        deleteClaim: false
+  resources:
+    requests:
+      memory: 1Gi
+      cpu: 500m
+    limits:
+      memory: 2Gi
+      cpu: "1"
+  template:
+    pod:
+      topologySpreadConstraints:
+        - maxSkew: 1
+          topologyKey: kubernetes.io/hostname
+          whenUnsatisfiable: DoNotSchedule
+          labelSelector:
+            matchLabels:
+              strimzi.io/pool-name: brokers
diff --git a/examples/scenarios/kafka-mirror/target-cluster/controllers.yaml b/examples/scenarios/kafka-mirror/target-cluster/controllers.yaml
new file mode 100644
index 0000000..7a34d7d
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/target-cluster/controllers.yaml
@@ -0,0 +1,34 @@
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaNodePool
+metadata:
+  name: controllers
+  namespace: kafka-target
+  labels:
+    strimzi.io/cluster: target
+spec:
+  replicas: 3
+  roles:
+    - controller
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 10Gi
+        deleteClaim: false
+  resources:
+    requests:
+      memory: 512Mi
+      cpu: 250m
+    limits:
+      memory: 1Gi
+      cpu: 500m
+  template:
+    pod:
+      topologySpreadConstraints:
+        - maxSkew: 1
+          topologyKey: kubernetes.io/hostname
+          whenUnsatisfiable: DoNotSchedule
+          labelSelector:
+            matchLabels:
+              strimzi.io/pool-name: controllers
diff --git a/examples/scenarios/kafka-mirror/target-cluster/kafka.yaml b/examples/scenarios/kafka-mirror/target-cluster/kafka.yaml
new file mode 100644
index 0000000..d63c1fa
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/target-cluster/kafka.yaml
@@ -0,0 +1,122 @@
+apiVersion: kafka.strimzi.io/v1
+kind: Kafka
+metadata:
+  name: target
+  namespace: kafka-target
+spec:
+  kafka:
+    version: 4.2.0
+    metadataVersion: "4.2-IV0"
+    listeners:
+      - name: plain
+        port: 9092
+        type: internal
+        tls: false
+      - name: tls
+        port: 9093
+        type: internal
+        tls: true
+        authentication:
+          type: tls
+    config:
+      offsets.topic.replication.factor: 3
+      transaction.state.log.replication.factor: 3
+      transaction.state.log.min.isr: 2
+      default.replication.factor: 3
+      min.insync.replicas: 2
+      log.retention.hours: 168
+      log.segment.bytes: 1073741824
+    authorization:
+      type: simple
+      superUsers:
+        - CN=admin-user
+        - CN=mirror-user
+    logging:
+      type: inline
+      loggers:
+        rootLogger.level: INFO
+    metricsConfig:
+      type: jmxPrometheusExporter
+      valueFrom:
+        configMapKeyRef:
+          name: kafka-metrics-config
+          key: kafka-metrics-config.yaml
+  entityOperator:
+    topicOperator:
+      resources:
+        requests:
+          cpu: 200m
+          memory: 256Mi
+        limits:
+          cpu: 500m
+          memory: 512Mi
+    userOperator:
+      resources:
+        requests:
+          cpu: 200m
+          memory: 256Mi
+        limits:
+          cpu: 500m
+          memory: 512Mi
+  cruiseControl:
+    resources:
+      requests:
+        cpu: 200m
+        memory: 256Mi
+      limits:
+        cpu: "1"
+        memory: 1Gi
+    autoRebalance:
+      - mode: add-brokers
+      - mode: remove-brokers
+    config:
+      cruise.control.metrics.topic.num.partitions: 1
+      cruise.control.metrics.topic.replication.factor: 3
+      cruise.control.metrics.topic.min.insync.replicas: 2
+    metricsConfig:
+      type: jmxPrometheusExporter
+      valueFrom:
+        configMapKeyRef:
+          name: kafka-metrics-config
+          key: cruise-control-metrics-config.yaml
+  kafkaExporter:
+    resources:
+      requests:
+        cpu: 100m
+        memory: 128Mi
+      limits:
+        cpu: 500m
+        memory: 256Mi
+
+---
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaUser
+metadata:
+  name: mirror-user
+  namespace: kafka-target
+  labels:
+    strimzi.io/cluster: target
+spec:
+  authentication:
+    type: tls
+  authorization:
+    type: simple
+    acls:
+      - resource:
+          type: topic
+          name: "*"
+          patternType: literal
+        operations:
+          - Describe
+          - Read
+          - Write
+          - Create
+        host: "*"
+      - resource:
+          type: group
+          name: "*"
+          patternType: literal
+        operations:
+          - Describe
+          - Read
+        host: "*"
diff --git a/examples/scenarios/kafka-mirror/target-cluster/kustomization.yaml b/examples/scenarios/kafka-mirror/target-cluster/kustomization.yaml
new file mode 100644
index 0000000..0e43987
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/target-cluster/kustomization.yaml
@@ -0,0 +1,9 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+resources:
+  - namespace.yaml
+  - metrics-config.yaml
+  - controllers.yaml
+  - brokers.yaml
+  - kafka.yaml
diff --git a/examples/scenarios/kafka-mirror/target-cluster/metrics-config.yaml b/examples/scenarios/kafka-mirror/target-cluster/metrics-config.yaml
new file mode 100644
index 0000000..a2d1ec8
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/target-cluster/metrics-config.yaml
@@ -0,0 +1,81 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kafka-metrics-config
+  namespace: kafka-target
+data:
+  kafka-metrics-config.yaml: |
+    lowercaseOutputName: true
+    rules:
+      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
+        name: kafka_server_$1_$2
+        type: GAUGE
+        labels:
+          clientId: "$3"
+          topic: "$4"
+          partition: "$5"
+      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
+        name: kafka_server_$1_$2
+        type: GAUGE
+        labels:
+          clientId: "$3"
+          broker: "$4:$5"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>MeanRate
+        name: kafka_$1_$2_$3_percent
+        type: GAUGE
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>Value
+        name: kafka_$1_$2_$3_percent
+        type: GAUGE
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
+        name: kafka_$1_$2_$3_total
+        type: COUNTER
+        labels:
+          "$4": "$5"
+          "$6": "$7"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
+        name: kafka_$1_$2_$3_total
+        type: COUNTER
+        labels:
+          "$4": "$5"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
+        name: kafka_$1_$2_$3_total
+        type: COUNTER
+      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
+        name: kafka_$1_$2_$3
+        type: GAUGE
+        labels:
+          "$4": "$5"
+          "$6": "$7"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
+        name: kafka_$1_$2_$3
+        type: GAUGE
+        labels:
+          "$4": "$5"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
+        name: kafka_$1_$2_$3
+        type: GAUGE
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
+        name: kafka_$1_$2_$3_count
+        type: COUNTER
+      - pattern: "kafka.server<type=raft-metrics><>(.+-total|.+-max):"
+        name: kafka_server_raftmetrics_$1
+        type: COUNTER
+      - pattern: "kafka.server<type=raft-metrics><>(.+):"
+        name: kafka_server_raftmetrics_$1
+        type: GAUGE
+      - pattern: "kafka.server<type=raft-channel-metrics><>(.+-total|.+-max):"
+        name: kafka_server_raftchannelmetrics_$1
+        type: COUNTER
+      - pattern: "kafka.server<type=raft-channel-metrics><>(.+):"
+        name: kafka_server_raftchannelmetrics_$1
+        type: GAUGE
+      - pattern: "kafka.server<type=broker-metadata-metrics><>(.+):"
+        name: kafka_server_brokermetadatametrics_$1
+        type: GAUGE
+
+  cruise-control-metrics-config.yaml: |
+    lowercaseOutputName: true
+    rules:
+      - pattern: "kafka.cruisecontrol<name=(.+)><>(\\w+)"
+        name: "kafka_cruisecontrol_$1_$2"
+        type: "GAUGE"
diff --git a/examples/scenarios/kafka-mirror/target-cluster/namespace.yaml b/examples/scenarios/kafka-mirror/target-cluster/namespace.yaml
new file mode 100644
index 0000000..9580fa2
--- /dev/null
+++ b/examples/scenarios/kafka-mirror/target-cluster/namespace.yaml
@@ -0,0 +1,6 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: kafka-target
+  labels:
+    name: kafka-target
diff --git a/examples/scenarios/kafka-oauth/README.md b/examples/scenarios/kafka-oauth/README.md
new file mode 100644
index 0000000..08d8155
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/README.md
@@ -0,0 +1,234 @@
+# Kafka with OAuth/OIDC Authentication
+
+A self-contained scenario deploying Keycloak as the OIDC identity provider and a Kafka cluster with OAuth 2.0 token-based authentication using the Strimzi `type: custom` listener configuration.
+
+## What Gets Deployed
+
+### Keycloak (namespace: `keycloak`)
+
+- **PostgreSQL** database (StatefulSet, 5Gi storage)
+- **Keycloak** instance (1 replica, HTTP mode)
+- **Realm** `kafka` with:
+  - Roles: `kafka-admin`, `kafka-producer`, `kafka-consumer`
+  - Users: `kafka-admin`, `producer-user`, `consumer-user`
+  - Clients: `kafka-broker` (for Kafka token validation), `kafka-client` (for applications)
+
+### Kafka (namespace: `kafka-oauth`)
+
+- **Kafka cluster** (`my-cluster`): KRaft mode, Kafka 4.2.0
+- **KafkaNodePool** `controllers`: 3 replicas, 10Gi storage
+- **KafkaNodePool** `brokers`: 3 replicas, 10Gi storage
+- **OAuth listener** on port 9093 using `type: custom` with OAUTHBEARER SASL
+- **Entity Operator**: Topic Operator + User Operator
+- **Cruise Control**: Auto-rebalancing
+
+## Architecture
+
+```
+┌──────────────────────┐        ┌──────────────────────┐
+│  keycloak namespace  │        │   kafka-oauth ns     │
+│                      │        │                      │
+│  ┌────────────────┐  │  JWKS  │  ┌────────────────┐  │
+│  │   Keycloak     │◄─┼────────┼──┤ Kafka cluster  │  │
+│  │  (HTTP :8080)  │  │        │  │  (OAUTHBEARER  │  │
+│  └───────┬────────┘  │        │  │  on port 9093) │  │
+│          │           │        │  └────────────────┘  │
+│  ┌───────▼────────┐  │        │                      │
+│  │   PostgreSQL   │  │        │                      │
+│  └────────────────┘  │        └──────────────────────┘
+└──────────────────────┘
+```
+
+## Listeners
+
+| Name | Port | Type | TLS | Authentication |
+|------|------|------|-----|----------------|
+| `plain` | 9092 | internal | No | None |
+| `tls` | 9093 | internal | Yes | OAuth 2.0 (OAUTHBEARER via `type: custom`) |
+
+## Prerequisites
+
+- ArgoCD installed (see [ArgoCD installation](../../argo-cd/))
+- Strimzi or Streams for Apache Kafka operator deployed via ArgoCD (see [strimzi-operator](../../operators/strimzi/))
+- Keycloak or RHBK operator deployed via ArgoCD (see [keycloak-operator](../../operators/keycloak/))
+
+## Deploy via ArgoCD
+
+```bash
+kubectl apply -f argocd/application.yaml
+```
+
+Or deploy all scenarios at once using the [app-of-apps](../../app-of-apps/) ApplicationSet.
+
+## Verify
+
+```bash
+# Check Keycloak is running
+kubectl get pods -n keycloak
+kubectl get keycloak -n keycloak
+
+# Check Kafka is running
+kubectl get pods -n kafka-oauth
+kubectl get kafka my-cluster -n kafka-oauth
+```
+
+## Obtain a Token and Produce Messages
+
+```bash
+# Run a producer Job using the Strimzi test-clients image
+cat <<EOF | kubectl apply -f -
+...
+EOF
+```
+
+The OAuth listener configuration rendered on the brokers includes:
+
+```
+org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
+  oauth.valid.issuer.uri="http://keycloak-service.keycloak.svc:8080/realms/kafka"
+  oauth.jwks.endpoint.uri="http://keycloak-service.keycloak.svc:8080/realms/kafka/protocol/openid-connect/certs"
+  oauth.username.claim="preferred_username" ...
+principal.builder.class: io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder
+```
+
+The Strimzi OAuth libraries (`io.strimzi.kafka.oauth.*`) are bundled in the Strimzi Kafka images.
+
+## Realm Users and Clients
+
+| User/Client | Type | Credentials | Role |
+|-------------|------|-------------|------|
+| `kafka-admin` | User | `kafka-admin` / `kafka-admin` | Full Kafka access |
+| `producer-user` | User | `producer-user` / `producer-user` | Produce to topics |
+| `consumer-user` | User | `consumer-user` / `consumer-user` | Consume from topics |
+| `kafka-broker` | Client | secret: `kafka-broker-secret` | Kafka broker token validation |
+| `kafka-client` | Client | secret: `kafka-client-secret` | Application client credentials |
+
+## Customization
+
+- **OIDC provider URLs**: Update `oauthbearer.sasl.jaas.config` in `kafka/kafka.yaml` to point to your provider
+- **TLS for Keycloak**: For production, add `spec.http.tlsSecret` to `keycloak/keycloak.yaml` and mount the CA cert in the Kafka pod template (see [Strimzi docs](https://strimzi.io/docs/operators/latest/configuring.html) for volume mount examples)
+- **Realm configuration**: Add users, roles, and clients in `keycloak/realm-import.yaml`
+- **External access**: Add an Ingress/Route for Keycloak and a `type: route` or `type: loadbalancer` listener on Kafka
+- **Production secrets**: Replace the example secrets in `keycloak/database-secret.yaml` and `keycloak/admin-secret.yaml` with proper secret management (e.g., Sealed Secrets, External Secrets Operator)
+
+## Using with RHBK Instead of Keycloak
+
+This scenario works identically with Red Hat Build of Keycloak (RHBK). Install the RHBK operator instead of the community Keycloak operator:
+
+```bash
+# Instead of: kubectl apply -f ../../operators/keycloak/overlays/kubernetes/argocd/application.yaml
+kubectl apply -f ../../operators/keycloak/overlays/rhbk/argocd/application.yaml
+```
+
+The Keycloak CR and KeycloakRealmImport CR use the same `k8s.keycloak.org/v2alpha1` API for both products.
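+
+## Example Client Configuration
+
+For reference, a minimal client-side properties sketch (not part of the deployed manifests) for connecting to the OAUTHBEARER listener. It assumes the `kafka-client` client and `kafka-client-secret` value defined in `keycloak/realm-import.yaml` and the in-cluster Keycloak service URL; the truststore path is hypothetical, and the cluster CA can be extracted from the standard Strimzi `my-cluster-cluster-ca-cert` Secret:
+
+```properties
+# Listener is TLS-enabled, so SASL over TLS with the cluster CA in a truststore
+security.protocol=SASL_SSL
+ssl.truststore.type=PKCS12
+ssl.truststore.location=/tmp/ca.p12
+
+sasl.mechanism=OAUTHBEARER
+# Strimzi OAuth client callback handler, bundled in the Strimzi client images
+sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
+sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
+  oauth.client.id="kafka-client" \
+  oauth.client.secret="kafka-client-secret" \
+  oauth.token.endpoint.uri="http://keycloak-service.keycloak.svc:8080/realms/kafka/protocol/openid-connect/token";
+```
+
+With this file mounted into a client pod, the standard console tools work unchanged, e.g. `kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic my-topic --producer.config client.properties`.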
+
+## Compatibility
+
+These manifests use `kafka.strimzi.io/v1` and `k8s.keycloak.org/v2alpha1` CRDs and work with:
+- **Strimzi** 1.0.0+ / **Streams for Apache Kafka** 3.2+
+- **Keycloak** 26.0+ / **Red Hat Build of Keycloak** (RHBK)
diff --git a/examples/scenarios/kafka-oauth/argocd/application.yaml b/examples/scenarios/kafka-oauth/argocd/application.yaml
new file mode 100644
index 0000000..b48bba9
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/argocd/application.yaml
@@ -0,0 +1,24 @@
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: kafka-oauth
+  namespace: argocd
+spec:
+  project: streamshub
+  source:
+    repoURL: https://github.com/streamshub/streamshub-gitops.git
+    # TODO: switch back to HEAD before merge
+    targetRevision: argocd-kafka-examples
+    path: examples/scenarios/kafka-oauth
+  destination:
+    server: https://kubernetes.default.svc
+    namespace: kafka-oauth
+  syncPolicy:
+    automated:
+      prune: false
+      selfHeal: true
+    managedNamespaceMetadata:
+      labels:
+        argocd.argoproj.io/managed-by: argocd
+    syncOptions:
+      - CreateNamespace=true
diff --git a/examples/scenarios/kafka-oauth/kafka/brokers.yaml b/examples/scenarios/kafka-oauth/kafka/brokers.yaml
new file mode 100644
index 0000000..bfadad4
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/kafka/brokers.yaml
@@ -0,0 +1,34 @@
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaNodePool
+metadata:
+  name: brokers
+  namespace: kafka-oauth
+  labels:
+    strimzi.io/cluster: my-cluster
+spec:
+  replicas: 3
+  roles:
+    - broker
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 10Gi
+        deleteClaim: false
+  resources:
+    requests:
+      memory: 1Gi
+      cpu: 500m
+    limits:
+      memory: 2Gi
+      cpu: "1"
+  template:
+    pod:
+      topologySpreadConstraints:
+        - maxSkew: 1
+          topologyKey: kubernetes.io/hostname
+          whenUnsatisfiable: DoNotSchedule
+          labelSelector:
+            matchLabels:
+              strimzi.io/pool-name: brokers
diff --git a/examples/scenarios/kafka-oauth/kafka/controllers.yaml b/examples/scenarios/kafka-oauth/kafka/controllers.yaml
new file mode 100644
index 0000000..f04a214
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/kafka/controllers.yaml
@@ -0,0 +1,34 @@
+apiVersion: kafka.strimzi.io/v1
+kind: KafkaNodePool
+metadata:
+  name: controllers
+  namespace: kafka-oauth
+  labels:
+    strimzi.io/cluster: my-cluster
+spec:
+  replicas: 3
+  roles:
+    - controller
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 10Gi
+        deleteClaim: false
+  resources:
+    requests:
+      memory: 512Mi
+      cpu: 250m
+    limits:
+      memory: 1Gi
+      cpu: 500m
+  template:
+    pod:
+      topologySpreadConstraints:
+        - maxSkew: 1
+          topologyKey: kubernetes.io/hostname
+          whenUnsatisfiable: DoNotSchedule
+          labelSelector:
+            matchLabels:
+              strimzi.io/pool-name: controllers
diff --git a/examples/scenarios/kafka-oauth/kafka/kafka.yaml b/examples/scenarios/kafka-oauth/kafka/kafka.yaml
new file mode 100644
index 0000000..14bcc40
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/kafka/kafka.yaml
@@ -0,0 +1,104 @@
+apiVersion: kafka.strimzi.io/v1
+kind: Kafka
+metadata:
+  name: my-cluster
+  namespace: kafka-oauth
+spec:
+  kafka:
+    version: 4.2.0
+    metadataVersion: "4.2-IV0"
+    listeners:
+      - name: plain
+        port: 9092
+        type: internal
+        tls: false
+      - name: tls
+        port: 9093
+        type: internal
+        tls: true
+        authentication:
+          type: custom
+          sasl: true
+          listenerConfig:
+            sasl.enabled.mechanisms: OAUTHBEARER
+            oauthbearer.sasl.server.callback.handler.class: io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler
+            oauthbearer.sasl.jaas.config: >
+              org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
+              unsecuredLoginStringClaim_sub="admin"
+              oauth.valid.issuer.uri="http://keycloak-service.keycloak.svc:8080/realms/kafka"
+              oauth.jwks.endpoint.uri="http://keycloak-service.keycloak.svc:8080/realms/kafka/protocol/openid-connect/certs"
+              oauth.username.claim="preferred_username"
+              oauth.check.issuer="true"
+              oauth.check.access.token.type="true"
+              oauth.fallback.username.claim="client_id"
+              oauth.fallback.username.prefix="service-account-";
+    config:
+      principal.builder.class: io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder
+      offsets.topic.replication.factor: 3
+      transaction.state.log.replication.factor: 3
+      transaction.state.log.min.isr: 2
+      default.replication.factor: 3
+      min.insync.replicas: 2
+      log.retention.hours: 168
+      log.segment.bytes: 1073741824
+    authorization:
+      type: simple
+      superUsers:
+        - CN=admin-user
+        - service-account-kafka-client
+    logging:
+      type: inline
+      loggers:
+        rootLogger.level: INFO
+    metricsConfig:
+      type: jmxPrometheusExporter
+      valueFrom:
+        configMapKeyRef:
+          name: kafka-metrics-config
+          key: kafka-metrics-config.yaml
+  entityOperator:
+    topicOperator:
+      resources:
+        requests:
+          cpu: 200m
+          memory: 256Mi
+        limits:
+          cpu: 500m
+          memory: 512Mi
+    userOperator:
+      resources:
+        requests:
+          cpu: 200m
+          memory: 256Mi
+        limits:
+          cpu: 500m
+          memory: 512Mi
+  cruiseControl:
+    resources:
+      requests:
+        cpu: 200m
+        memory: 256Mi
+      limits:
+        cpu: "1"
+        memory: 1Gi
+    autoRebalance:
+      - mode: add-brokers
+      - mode: remove-brokers
+    config:
+      cruise.control.metrics.topic.num.partitions: 1
+      cruise.control.metrics.topic.replication.factor: 3
+      cruise.control.metrics.topic.min.insync.replicas: 2
+    metricsConfig:
+      type: jmxPrometheusExporter
+      valueFrom:
+        configMapKeyRef:
+          name: kafka-metrics-config
+          key: cruise-control-metrics-config.yaml
+  kafkaExporter:
+    resources:
+      requests:
+        cpu: 100m
+        memory: 128Mi
+      limits:
+        cpu: 500m
+        memory: 256Mi
diff --git a/examples/scenarios/kafka-oauth/kafka/kustomization.yaml b/examples/scenarios/kafka-oauth/kafka/kustomization.yaml
new file mode 100644
index 0000000..0e43987
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/kafka/kustomization.yaml
@@ -0,0 +1,9 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+resources:
+  - namespace.yaml
+  - metrics-config.yaml
+  - controllers.yaml
+  - brokers.yaml
+  - kafka.yaml
diff --git a/examples/scenarios/kafka-oauth/kafka/metrics-config.yaml b/examples/scenarios/kafka-oauth/kafka/metrics-config.yaml
new file mode 100644
index 0000000..1ada4e3
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/kafka/metrics-config.yaml
@@ -0,0 +1,81 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kafka-metrics-config
+  namespace: kafka-oauth
+data:
+  kafka-metrics-config.yaml: |
+    lowercaseOutputName: true
+    rules:
+      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
+        name: kafka_server_$1_$2
+        type: GAUGE
+        labels:
+          clientId: "$3"
+          topic: "$4"
+          partition: "$5"
+      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
+        name: kafka_server_$1_$2
+        type: GAUGE
+        labels:
+          clientId: "$3"
+          broker: "$4:$5"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>MeanRate
+        name: kafka_$1_$2_$3_percent
+        type: GAUGE
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>Value
+        name: kafka_$1_$2_$3_percent
+        type: GAUGE
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
+        name: kafka_$1_$2_$3_total
+        type: COUNTER
+        labels:
+          "$4": "$5"
+          "$6": "$7"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
+        name: kafka_$1_$2_$3_total
+        type: COUNTER
+        labels:
+          "$4": "$5"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
+        name: kafka_$1_$2_$3_total
+        type: COUNTER
+      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
+        name: kafka_$1_$2_$3
+        type: GAUGE
+        labels:
+          "$4": "$5"
+          "$6": "$7"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
+        name: kafka_$1_$2_$3
+        type: GAUGE
+        labels:
+          "$4": "$5"
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
+        name: kafka_$1_$2_$3
+        type: GAUGE
+      - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
+        name: kafka_$1_$2_$3_count
+        type: COUNTER
+      - pattern: "kafka.server<type=raft-metrics><>(.+-total|.+-max):"
+        name: kafka_server_raftmetrics_$1
+        type: COUNTER
+      - pattern: "kafka.server<type=raft-metrics><>(.+):"
+        name: kafka_server_raftmetrics_$1
+        type: GAUGE
+      - pattern: "kafka.server<type=raft-channel-metrics><>(.+-total|.+-max):"
+        name: kafka_server_raftchannelmetrics_$1
+        type: COUNTER
+      - pattern: "kafka.server<type=raft-channel-metrics><>(.+):"
+        name: kafka_server_raftchannelmetrics_$1
+        type: GAUGE
+      - pattern: "kafka.server<type=broker-metadata-metrics><>(.+):"
+        name: kafka_server_brokermetadatametrics_$1
+        type: GAUGE
+
+  cruise-control-metrics-config.yaml: |
+    lowercaseOutputName: true
+    rules:
+      - pattern: "kafka.cruisecontrol<name=(.+)><>(\\w+)"
+        name: "kafka_cruisecontrol_$1_$2"
+        type: "GAUGE"
diff --git a/examples/scenarios/kafka-oauth/kafka/namespace.yaml b/examples/scenarios/kafka-oauth/kafka/namespace.yaml
new file mode 100644
index 0000000..e9814ee
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/kafka/namespace.yaml
@@ -0,0 +1,6 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: kafka-oauth
+  labels:
+    name: kafka-oauth
diff --git a/examples/scenarios/kafka-oauth/keycloak/admin-secret.yaml b/examples/scenarios/kafka-oauth/keycloak/admin-secret.yaml
new file mode 100644
index 0000000..e533a90
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/keycloak/admin-secret.yaml
@@ -0,0 +1,9 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: keycloak-admin
+  namespace: keycloak
+type: Opaque
+stringData:
+  username: admin
+  password: admin
diff --git a/examples/scenarios/kafka-oauth/keycloak/database-secret.yaml b/examples/scenarios/kafka-oauth/keycloak/database-secret.yaml
new file mode 100644
index 0000000..60ca294
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/keycloak/database-secret.yaml
@@ -0,0 +1,9 @@
+apiVersion: v1
+kind: Secret
+metadata:
+  name: keycloak-db-secret
+  namespace: keycloak
+type: Opaque
+stringData:
+  username: keycloak
+  password: changeme
diff --git a/examples/scenarios/kafka-oauth/keycloak/database.yaml b/examples/scenarios/kafka-oauth/keycloak/database.yaml
new file mode 100644
index 0000000..168d4c4
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/keycloak/database.yaml
@@ -0,0 +1,65 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: postgresql-db
+  namespace: keycloak
+spec:
+  serviceName: postgresql-db-service
+  selector:
+    matchLabels:
+      app: postgresql-db
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: postgresql-db
+    spec:
+      containers:
+        - name: postgresql-db
+          image: quay.io/sclorg/postgresql-15-c9s:latest
+          ports:
+            - containerPort: 5432
+          volumeMounts:
+            - mountPath: /var/lib/pgsql/data
+              name: postgres-data
+          env:
+            - name: POSTGRESQL_USER
+              valueFrom:
+                secretKeyRef:
+                  name: keycloak-db-secret
+                  key: username
+            - name: POSTGRESQL_PASSWORD
+              valueFrom:
+                secretKeyRef:
+                  name: keycloak-db-secret
+                  key: password
+            - name: POSTGRESQL_DATABASE
+              value: keycloak
+          resources:
+            requests:
+              cpu: 100m
+              memory: 128Mi
+            limits:
+              cpu: 250m
+              memory: 256Mi
+  volumeClaimTemplates:
+    - metadata:
+        name: postgres-data
+      spec:
+        accessModes:
+          - ReadWriteOnce
+        resources:
+          requests:
+            storage: 5Gi
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: postgres-db
+  namespace: keycloak
+spec:
+  selector:
+    app: postgresql-db
+  ports:
+    - port: 5432
+      targetPort: 5432
diff --git a/examples/scenarios/kafka-oauth/keycloak/keycloak.yaml b/examples/scenarios/kafka-oauth/keycloak/keycloak.yaml
new file mode 100644
index 0000000..e348bd6
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/keycloak/keycloak.yaml
@@ -0,0 +1,34 @@
+apiVersion: k8s.keycloak.org/v2alpha1
+kind: Keycloak
+metadata:
+  name: keycloak
+  namespace: keycloak
+spec:
+  instances: 1
+  resources:
+    requests:
+      cpu: 250m
+      memory: 256Mi
+    limits:
+      cpu: 500m
+      memory: 512Mi
+  bootstrapAdmin:
+    user:
+      secret: keycloak-admin
+  db:
+    vendor: postgres
+    host: postgres-db
+    usernameSecret:
+      name: keycloak-db-secret
+      key: username
+    passwordSecret:
+      name: keycloak-db-secret
+      key: password
+  http:
+    httpEnabled: true
+  hostname:
+    hostname: http://keycloak-service.keycloak.svc:8080
+    strict: false
+    backchannelDynamic: true
+  proxy:
+    headers: xforwarded
diff --git a/examples/scenarios/kafka-oauth/keycloak/kustomization.yaml b/examples/scenarios/kafka-oauth/keycloak/kustomization.yaml
new file mode 100644
index 0000000..dd8370c
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/keycloak/kustomization.yaml
@@ -0,0 +1,10 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+resources:
+  - namespace.yaml
+  - database-secret.yaml
+  - admin-secret.yaml
+  - database.yaml
+  - keycloak.yaml
+  - realm-import.yaml
diff --git a/examples/scenarios/kafka-oauth/keycloak/namespace.yaml b/examples/scenarios/kafka-oauth/keycloak/namespace.yaml
new file mode 100644
index 0000000..c2216c7
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/keycloak/namespace.yaml
@@ -0,0 +1,6 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: keycloak
+  labels:
+    name: keycloak
diff --git a/examples/scenarios/kafka-oauth/keycloak/realm-import.yaml b/examples/scenarios/kafka-oauth/keycloak/realm-import.yaml
new file mode 100644
index 0000000..5608699
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/keycloak/realm-import.yaml
@@ -0,0 +1,81 @@
+apiVersion: k8s.keycloak.org/v2alpha1
+kind: KeycloakRealmImport
+metadata:
+  name: kafka-realm-import
+  namespace: keycloak
+spec:
+  keycloakCRName: keycloak
+  realm:
+    realm: kafka
+    enabled: true
+    sslRequired: external
+
+    roles:
+      realm:
+        - name: kafka-admin
+          description: Full access to all Kafka resources
+        - name: kafka-producer
+          description: Can produce messages to topics
+        - name: kafka-consumer
+          description: Can consume messages from topics
+
+    users:
+      - username: kafka-admin
+        enabled: true
+        credentials:
+          - type: password
+            value: kafka-admin
+        realmRoles:
+          - kafka-admin
+          - offline_access
+
+      - username: producer-user
+        enabled: true
+        credentials:
+          - type: password
+            value: producer-user
+        realmRoles:
+          - kafka-producer
+          - offline_access
+
+      - username: consumer-user
+        enabled: true
+        credentials:
+          - type: password
+            value: consumer-user
+        realmRoles:
+          - kafka-consumer
+          - offline_access
+
+      - username: service-account-kafka-client
+        enabled: true
+        serviceAccountClientId: kafka-client
+        realmRoles:
+          - kafka-producer
+          - kafka-consumer
+          - offline_access
+
+    clients:
+      - clientId: kafka-broker
+        enabled: true
+        clientAuthenticatorType: client-secret
+        secret: kafka-broker-secret
+        bearerOnly: false
+        standardFlowEnabled: false
+        implicitFlowEnabled: false
+        directAccessGrantsEnabled: false
+        serviceAccountsEnabled: true
+        publicClient: false
+        fullScopeAllowed: true
+
+      - clientId: kafka-client
+        enabled: true
+        clientAuthenticatorType: client-secret
+        secret: kafka-client-secret
+        bearerOnly: false
+        standardFlowEnabled: false
+        implicitFlowEnabled: false
+        directAccessGrantsEnabled: true
+        serviceAccountsEnabled: true
+        publicClient: false
+        fullScopeAllowed: true
diff --git a/examples/scenarios/kafka-oauth/kustomization.yaml b/examples/scenarios/kafka-oauth/kustomization.yaml
new file mode 100644
index 0000000..9e5cf2e
--- /dev/null
+++ b/examples/scenarios/kafka-oauth/kustomization.yaml
@@ -0,0 +1,6 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+
+resources:
+  - keycloak
+  - kafka