diff --git a/docs/getstarted/ai-controller-tutorial.md b/docs/getstarted/ai-controller-tutorial.md
new file mode 100644
index 00000000..363d14ab
--- /dev/null
+++ b/docs/getstarted/ai-controller-tutorial.md
@@ -0,0 +1,593 @@
---
title: Build an AI controller with Crossplane
description: Deploy a WatchOperation that uses a local LLM to enforce platform policy.
weight: {weight}
validation:
  type: walkthrough
  owner: docs@upbound.io
  environment: local-upbound
  timeout: 45m
  variables:
    HOST_IP: ""
---

In this tutorial, you run a Kubernetes controller with reconciliation logic in
plain English. A Crossplane `WatchOperation` watches an nginx `Deployment` and
calls a local LLM whenever it changes. The LLM reads the current state, applies
the rule in its `systemPrompt`, and returns a corrected manifest. Crossplane
applies it.

By the end of this tutorial, you can:

- Deploy a `WatchOperation` that calls a local LLM on every resource change
- Watch the controller detect and correct a policy violation automatically
- Update the enforcement rule by editing a single field in YAML

The model in this tutorial is `qwen3.5:latest`, running locally via Ollama.
No cloud API key required.

## Prerequisites

Install the following before starting:

- [Docker][docker-install], running locally
- [`kubectl`][kubectl-install]
- [`kind`][kind-install]

### Install the up CLI

This tutorial requires up CLI v0.44.3.

```shell
curl -sL "https://cli.upbound.io" | VERSION=v0.44.3 sh
```

Move the binary into your `PATH`:

```shell
sudo mv up /usr/local/bin/
```

If you don't have `sudo` access:

```shell
mkdir -p ~/.local/bin && mv up ~/.local/bin/
export PATH="$HOME/.local/bin:$PATH"
```

Add the `export` line to your shell profile (`~/.bashrc`, `~/.zshrc`) to make it permanent.
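For example, for bash (assuming `~/.bashrc` is your profile file):

```shell
# Append the PATH change so future bash sessions pick it up.
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
```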
+ +Verify the installation: + +```shell +up version +``` + +## Create the project + +### Initialize the project + +Scaffold a new project with `up project init`. This creates the +`english-controller/` directory with a valid `upbound.yaml` and the standard +project layout (`apis/`, `functions/`, `examples/`, `tests/`): + +```bash +up project init --scratch english-controller +cd english-controller +``` + +All commands from this point run from inside the `english-controller` directory. + +### Add function dependencies + +The controller uses two Crossplane functions: `function-auto-ready` so the +`WatchOperation` reports ready status, and `function-openai` to call the LLM. +Add them as project dependencies: + +```bash +up dependency add 'xpkg.upbound.io/crossplane-contrib/function-auto-ready' +up dependency add 'xpkg.upbound.io/upbound/function-openai:v0.3.0' +``` + +`up dependency add` records each dependency in `upbound.yaml`. + + +### Create the WatchOperation + + +The `WatchOperation` is the controller. It watches the nginx `Deployment` and +calls `upbound-function-openai` whenever it changes. The function sends the +current resource state to the LLM along with the `systemPrompt` rule. The LLM +returns a corrected manifest. Crossplane applies it. + +```bash +mkdir -p operations/replicas +cat > operations/replicas/operation.yaml <<'EOF' +apiVersion: ops.crossplane.io/v1alpha1 +kind: WatchOperation +metadata: + name: replicas +spec: + concurrencyPolicy: Forbid + successfulHistoryLimit: 3 + failedHistoryLimit: 1 + operationTemplate: + spec: + mode: Pipeline + pipeline: + - functionRef: + name: upbound-function-openai + input: + apiVersion: openai.fn.upbound.io/v1alpha1 + kind: Prompt + systemPrompt: |- + You are a Kubernetes controller. Output raw YAML only — no markdown, no code fences, no backticks, no explanations. + + Rule: if spec.replicas is less than 3, set it to 3. Otherwise keep it unchanged. 
+ userPrompt: |- + Inspect the nginx Deployment and output the corrected manifest. + Output only the Deployment manifest with the correct spec.replicas value. + Include apiVersion, kind, metadata (name: nginx, namespace: default), and spec. + Start your response with 'apiVersion:' + step: deployment-analysis + credentials: + - name: gpt + source: Secret + secretRef: + namespace: crossplane-system + name: gpt + watch: + apiVersion: apps/v1 + kind: Deployment + namespace: default +EOF +``` + +:::info +The explicit output instructions in `userPrompt` are necessary for `qwen3.5:latest`. +With a larger model like `gpt-4o`, the `systemPrompt` can contain just the rule +itself, without format guidance. +::: + +### Create the nginx deployment + +Create the starting state of 1 replica. The AI controller corrects this. + +```bash +mkdir -p examples +cat > examples/deployment.yaml <<'EOF' +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app: nginx + name: nginx + namespace: default +spec: + replicas: 1 + selector: + matchLabels: + app: nginx + template: + metadata: + labels: + app: nginx + spec: + containers: + - image: nginx + name: nginx +EOF +``` + +## Set up Ollama + +Ollama runs the LLM locally. Install it and pull the model before starting the +cluster. The model is ~1 GB. + +### Install Ollama + +```shell +curl -fsSL https://ollama.com/install.sh | sh +``` + +If the install script doesn't work for your OS, download directly from +[ollama.com/download][ollama-download]. + +### Start Ollama + +On Linux, the install script registers a `systemd` service that starts Ollama +automatically. On macOS, start it manually in a separate terminal if +`ollama list` returns "could not connect to ollama server": + + + +```shell +ollama serve +``` + +### Pull the model + +```shell +ollama pull qwen3.5:latest +``` + +Confirm the model downloaded: + +```shell +ollama list +``` + +You should see `qwen3.5:latest` in the output. 
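You can also confirm the endpoint the cluster calls later. Ollama serves an OpenAI-compatible API on port 11434; this sketch assumes Ollama is running on the default local address:

```shell
# List the models Ollama serves through its OpenAI-compatible API,
# then pull the model IDs out of the JSON response.
curl -s http://localhost:11434/v1/models | grep -o '"id":"[^"]*"'
```

You should see `"id":"qwen3.5:latest"` among the results.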

## Start the project

Run from inside the `english-controller` directory:

```bash
up project run --local --control-plane-version=2.1.4-up.2
```

This creates a kind cluster, installs UXP, and deploys the function packages
declared in `upbound.yaml`. It exits when the cluster is ready.

:::warning
`up project run --local` may print `traces export: context deadline exceeded`.
This message reports a telemetry timeout and doesn't affect the cluster setup.
:::

### Configure kubectl

```bash
kind get kubeconfig --name up-english-controller > ~/.kube/config
```

:::warning
This overwrites your existing `~/.kube/config`. To preserve existing contexts,
merge instead:

```bash
kind get kubeconfig --name up-english-controller > ~/.kube/config-upbound
KUBECONFIG=~/.kube/config:~/.kube/config-upbound \
  kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config
```
:::

Verify the connection:

```bash
kubectl get nodes
```

### Wire Ollama into the cluster

The kind cluster's pods need to reach Ollama running on your host. Create a
Kubernetes `Service` and `Endpoints` that route cluster traffic to your machine.

1. Get the host's IPv4 address as seen from inside the cluster. This command
   works on Linux, macOS, and Windows:

   ```bash
   HOST_IP=$(docker run --rm --add-host=host.docker.internal:host-gateway alpine \
     getent hosts host.docker.internal | awk '$1 ~ /^[0-9.]+$/ {print $1; exit}')
   echo "Host IP: $HOST_IP"
   ```

2. Create the `ollama` namespace and register Ollama as a cluster service:

   ```bash
   kubectl create namespace ollama --dry-run=client -o yaml | kubectl apply -f -

   kubectl apply -f - <<EOF
   apiVersion: v1
   kind: Service
   metadata:
     name: ollama
     namespace: ollama
   spec:
     ports:
       - port: 11434
   ---
   apiVersion: v1
   kind: Endpoints
   metadata:
     name: ollama
     namespace: ollama
   subsets:
     - addresses:
         - ip: ${HOST_IP}
       ports:
         - port: 11434
   EOF
   ```

3. Create the `gpt` secret the `WatchOperation` references. It points
   `upbound-function-openai` at Ollama's OpenAI-compatible API inside the
   cluster:

   ```bash
   kubectl create secret generic gpt \
     --namespace crossplane-system \
     --from-literal=OPENAI_BASE_URL="http://ollama.ollama.svc.cluster.local:11434/v1" \
     --from-literal=OPENAI_MODEL="qwen3.5:latest" \
     --dry-run=client -o yaml | kubectl apply -f -
   ```

### Apply the WatchOperation

Crossplane Operations are Kubernetes objects that run logic against your cluster
on a trigger.

| Kind | Trigger |
|------|---------|
| `WatchOperation` | Every time a specific resource changes |
| `CronOperation` | On a schedule |
| `Operation` | Once, on demand |

This tutorial uses a `WatchOperation`. It watches the nginx `Deployment` and
calls an LLM every time it changes.

```bash
kubectl apply -f operations/replicas/operation.yaml
```

The `WatchOperation` fires immediately because the `Deployment` already exists.

### Watch it act

```bash
kubectl get deployment nginx -w
```

Within 60 to 90 seconds, replicas jump from 1 to 3. The LLM read the `Deployment`,
decided it violated the rule, and patched it.

Press Ctrl+C when replicas reach 3.

### Inspect the operation records

Each `Operation` object is a record of a single invocation.

```bash
kubectl get watchoperations
kubectl get operations
```

Pick one of the operation names and describe it:

```bash
kubectl describe operation <operation-name>
```

The `Events` section shows the exact YAML the model returned and what the
controller applied.

## Watch it heal

The `WatchOperation` re-evaluates on every change. If anything modifies the
`Deployment`, the rule re-applies.

### Scale down nginx

```bash
kubectl scale deployment nginx --replicas=1
```

### Watch the controller heal it

```bash
kubectl get deployment nginx -w
```

Within 30 to 60 seconds, replicas climb back to 3. The `WatchOperation` fired
because the `Deployment` changed. The LLM saw 1 replica, decided it violated
the rule, and patched it.

Press Ctrl+C when replicas are back at 3.

### See what fired

```bash
kubectl get watchoperations
kubectl get operations
```

Each entry is a record of what fired, what the model decided, and what changed.
The most recent one captured the scale-down event and the correction.
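Operation names are generated, so a small helper saves copying them by hand. A sketch using standard `kubectl` sorting:

```shell
# Describe the most recently created Operation without copying its name.
latest=$(kubectl get operations --sort-by=.metadata.creationTimestamp -o name | tail -n 1)
kubectl describe "$latest"
```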
+ +### See where the model runs + +```bash +kubectl get secret gpt -n crossplane-system -o yaml +``` + +`OPENAI_BASE_URL` points to Ollama's OpenAI-compatible API running locally on +your machine, so no data leaves the machine. Change that URL to +`https://api.openai.com/v1` and update `OPENAI_MODEL`, and the +`WatchOperation` works identically. + +## Change the rules + +To change the policy, edit `systemPrompt` and re-apply. + +### Update the minimum replicas to 5 + +Open `operations/replicas/operation.yaml`. Find the `systemPrompt` and change +the rule line from: + +```text +Rule: if spec.replicas is less than 3, set it to 3. Otherwise keep it unchanged. +``` + +To: + +```text +Rule: if spec.replicas is less than 5, set it to 5. Otherwise keep it unchanged. +``` + +Edit the file directly: + +**macOS:** + +```bash +sed -i '' 's/less than 3, set it to 3/less than 5, set it to 5/' \ + operations/replicas/operation.yaml +``` + +**Linux:** + +```bash +sed -i 's/less than 3, set it to 3/less than 5, set it to 5/' \ + operations/replicas/operation.yaml +``` + +:::info +With `qwen3.5:latest`, keep the full `userPrompt` output instructions in place. +The explicit YAML template keeps the local model's output reliable. With a +larger model like `gpt-4o`, you can remove the `userPrompt` entirely and keep +only the rule in `systemPrompt`. +::: + +### Apply the updated operation + +```bash +kubectl apply -f operations/replicas/operation.yaml +``` + +### Trigger and observe + +Scale nginx down to 1: + +```bash +kubectl scale deployment nginx --replicas=1 +``` + +Watch the updated rule enforce 5 replicas: + +```bash +kubectl get deployment nginx -w +``` + +This takes 30 to 45 seconds. Press Ctrl+C when you see 5 ready replicas. + +### Verify + +```bash +kubectl get watchoperations +kubectl get operations +``` + +:::tip +Try adding a conditional rule to the `systemPrompt`: + +``` +If the deployment name contains 'prod', require at least 5 replicas. +Otherwise, require at least 2. 
+``` + +The model interprets natural language conditions the same way it interprets +numeric rules. +::: + +## Clean up + +Delete the demo resources: + +```bash +kubectl delete watchoperation replicas +kubectl delete operations --all +kubectl delete deployment nginx +``` + +Delete the cluster: + +```bash +kind delete cluster --name up-english-controller +``` + +## Next steps + +In this tutorial, you: + +- Created a Crossplane project with a `WatchOperation` and a KCL function +- Deployed a controller that calls a local LLM on every `Deployment` change +- Watched the controller detect and correct a replica count violation +- Updated the enforcement policy by editing a single field in YAML + +Continue with: + +- [WatchOperations reference][watchops-ref]: triggers, concurrency, history limits, and output handling +- [CronOperations reference][cronops-ref]: schedule-driven operations +- [Composition functions][fn-docs]: build custom logic for any resource +- [Provider authentication][auth-docs]: connect providers to your own cloud account +- [Upbound Marketplace][marketplace]: functions and providers for AWS, Azure, GCP, and more + +[docker-install]: https://docs.docker.com/get-docker/ +[kubectl-install]: https://kubernetes.io/docs/tasks/tools/ +[kind-install]: https://kind.sigs.k8s.io/docs/user/quick-start/#installation +[ollama-download]: https://ollama.com/download +[up-cli-releases]: https://github.com/upbound/up/releases +[uxp-releases]: /reference/release-notes/ +[watchops-ref]: /manuals/crossplane/operations/watch/ +[cronops-ref]: /manuals/crossplane/operations/cron/ +[fn-docs]: /manuals/cli/howtos/compositions/ +[auth-docs]: /manuals/packages/providers/authentication/ +[marketplace]: https://marketplace.upbound.io/ diff --git a/docs/getstarted/ai-database-scaling-tutorial.md b/docs/getstarted/ai-database-scaling-tutorial.md new file mode 100644 index 00000000..60bc1815 --- /dev/null +++ b/docs/getstarted/ai-database-scaling-tutorial.md @@ -0,0 +1,508 @@ +--- 
+title: AI-driven database scaling with Crossplane +description: Deploy an AI controller that reads live RDS metrics and scales a database automatically. +weight: {weight} +validation: + type: walkthrough + owner: docs@upbound.io + environment: local-upbound + timeout: 60m + variables: + AWS_ACCESS_KEY_ID: "" + AWS_SECRET_ACCESS_KEY: "" + ANTHROPIC_API_KEY: "" +--- + +In this tutorial, you deploy an AI controller that manages an AWS RDS database. +A `CronOperation` runs every minute. It reads live CloudWatch metrics from the +database object, calls Claude, and decides whether to scale. If it scales, it +writes its reasoning back to the object as an annotation. + +By the end of this tutorial, you can: + +- See live CloudWatch metrics surfaced directly on a Crossplane `SQLInstance` object +- Deploy an AI scaling controller with a single `kubectl apply` +- Read the model's reasoning from the Kubernetes object it acted on +- Trigger a load test and watch the AI decide to scale up in real time + +## Prerequisites + +Install the following tools before starting: + +- [`kubectl`][kubectl-install] +- [AWS CLI][aws-cli], configured with credentials that can create VPCs and RDS instances +- [kind][kind] +- An [Anthropic API key][anthropic-console] with access to Claude + +### Install the up CLI + +```shell +curl -sL "https://cli.upbound.io" | sh +sudo mv up /usr/local/bin/ +``` + +If you don't have `sudo` access: + +```shell +mkdir -p ~/.local/bin && mv up ~/.local/bin/ +export PATH="$HOME/.local/bin:$PATH" +``` + +Add the `export` line to your shell profile (`~/.bashrc`, `~/.zshrc`) to make it permanent. + + +### Install mysqlslap + + +The load test in this tutorial uses `mysqlslap`, which ships with the MySQL +client tools. 
**macOS:**

```shell
brew install mysql-client
export PATH="$(brew --prefix mysql-client)/bin:$PATH"
```

**Linux (Debian/Ubuntu):**

```shell
sudo apt-get install -y mysql-client
```

## Clone the project

```bash
git clone https://github.com/upbound/configuration-aws-database-ai demo
cd demo
```

All commands from this point run from inside the `demo` directory.

## Configure credentials

Export your AWS credentials and Anthropic API key. The setup steps below use
these values to create Kubernetes secrets.

```bash
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export ANTHROPIC_API_KEY=<your-anthropic-api-key>
```

## Start the project

Open a dedicated terminal and run from inside the `demo` directory:

```bash
up project run --local --ingress
```

This command:

- Creates a kind cluster
- Installs UXP
- Builds and deploys the composition functions (`function-rds-metrics` and `function-claude`)
- Installs the AWS providers declared in `upbound.yaml`
- Applies the XRDs from `apis/`
- Installs an ingress controller for the UXP console

Startup takes several minutes. The command exits when the cluster is ready.

:::warning
`up project run --local` may print `traces export: context deadline exceeded`.
This message reports a telemetry timeout and doesn't affect the cluster setup.
:::

### Configure kubectl

In your second terminal, point kubectl at the new cluster. `up project run --local`
names the cluster after the project directory:

```bash
CLUSTER_NAME=$(kind get clusters | grep "^up-" | head -1)
kind get kubeconfig --name "${CLUSTER_NAME}" > ~/.kube/config
```

:::warning
This overwrites your existing `~/.kube/config`.
To preserve existing contexts, +merge instead: + +```bash +kind get kubeconfig --name "${CLUSTER_NAME}" > ~/.kube/config-upbound +KUBECONFIG=~/.kube/config:~/.kube/config-upbound \ + kubectl config view --flatten > ~/.kube/config.merged +mv ~/.kube/config.merged ~/.kube/config +``` +::: + +Verify the connection: + +```bash +kubectl get nodes +``` + +### Create the namespace and apply credentials + +1. Create the `database-team` namespace: + + ```bash + kubectl apply -f examples/ns-database-team.yaml + ``` + +2. Create the AWS credentials secret. The `ProviderConfig` and the + `function-rds-metrics` function both read from this secret: + + ```bash + kubectl create secret generic aws-creds \ + --namespace database-team \ + --from-literal=credentials="$(printf '[default]\naws_access_key_id = %s\naws_secret_access_key = %s\n' \ + "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY")" \ + --dry-run=client -o yaml | kubectl apply -f - + + kubectl create secret generic aws-creds \ + --namespace crossplane-system \ + --from-literal=credentials="$(printf '[default]\naws_access_key_id = %s\naws_secret_access_key = %s\n' \ + "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY")" \ + --dry-run=client -o yaml | kubectl apply -f - + ``` + +3. Create the Anthropic API key secret used by `function-claude`: + + ```bash + kubectl create secret generic claude \ + --namespace crossplane-system \ + --from-literal=ANTHROPIC_API_KEY="${ANTHROPIC_API_KEY}" \ + --dry-run=client -o yaml | kubectl apply -f - + ``` + +### Verify providers and functions + +Wait for both AWS providers and both functions to become healthy: + +```bash +kubectl get providers +kubectl get functions +``` + +All four should show `HEALTHY: True` before continuing. + +:::warning +If `kubectl get providers` or `kubectl get functions` returns **No resources found**, +`up project run --local` didn't complete. Delete the cluster and restart from +[Start the project](#start-the-project). 
+::: + +### Apply the ProviderConfig + +```bash +kubectl apply -f examples/providerconfig-aws-static.yaml +``` + +### Provision the network + +```bash +kubectl apply -f examples/network-rds-metrics.yaml +``` + +Wait for the network composite resource to become ready (~5 minutes): + +```bash +kubectl get network rds-metrics-database-ai-scale -n database-team -w +``` + +Press Ctrl+C once it shows `READY: True`. + +### Provision the database + +```bash +kubectl apply -f examples/mariadb-xr-rds-metrics.yaml +``` + +RDS provisioning takes 10 to 15 minutes. Watch the status: + +```bash +kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team -w +``` + +Press Ctrl+C once it shows `READY: True` before continuing. + +:::info +While you wait, the `function-rds-metrics` composition step is already +collecting CloudWatch data and writing it onto the object. By the time the +database is ready, `status.performanceMetrics` contains live data. +::: + +### Access the UXP console + +1. Enable the web UI: + + ```bash + up uxp web-ui enable + ``` + +2. In a new terminal, port-forward to the service: + + ```bash + kubectl port-forward -n crossplane-system svc/webui 8080:80 + ``` + +3. Open `http://localhost:8080` in your browser. + +## Meet the database + +An RDS MariaDB instance is running on AWS, managed by Crossplane. Before +wiring the AI into the loop, explore what the system already knows. + +### See the database object + +```bash +kubectl get sqlinstance -n database-team +``` + +You should see `rds-metrics-database-ai-mysql` with `READY: True`. That's a +real AWS RDS instance, managed as a Kubernetes object. + +In the UXP console, click **View all Composite Resources**. You'll see +`rds-metrics-database-ai-mysql` listed. Click **Relationship View** to see +the resources Crossplane provisioned. + +### Verify the AWS resource + +In the [AWS Console, RDS in `us-east-1`][aws-rds], find +`rds-metrics-database-ai-mysql`. 

### Find the performance metrics

```bash
kubectl describe sqlinstance rds-metrics-database-ai-mysql -n database-team
```

Find the `status.performanceMetrics` block. This block contains live
CloudWatch data such as CPU utilization, active connections, and free storage.
`function-rds-metrics` collects this data and writes it into the object. The
AI reads only this block and never queries CloudWatch directly.

Or fetch just the metrics:

```bash
kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
  -o jsonpath='{.status.performanceMetrics}' | jq .
```

### Open the controller

Open `operations/rds-intelligent-scaling-cron/operation.yaml` in your editor.

That file is the entire scaling controller. The `systemPrompt` defines the
scaling logic, including thresholds, instance class progression, and cooldown.

### Apply the controller

```bash
kubectl apply -f operations/rds-intelligent-scaling-cron/operation.yaml
```

### Watch the first decision

```bash
kubectl get cronoperation
```

The `CronOperation` takes 30 to 45 seconds to start. Once it's running, watch for the first operation:

```bash
kubectl get operations -w
```

Wait until an operation shows `SUCCEEDED: True`, then press Ctrl+C and describe it:

```bash
kubectl describe operation <operation-name>
```

The `Events` section shows the AI's reasoning and decision.

Then check the annotation written back to the database object:

```bash
kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
  -o jsonpath='{.metadata.annotations}' | jq .
```

In the UXP console, navigate to `rds-metrics-database-ai-mysql` and open the
**YAML** tab. The `intelligent-scaling/last-scaled-decision` annotation
contains the model's last decision.

## Watch the controller idle

The `CronOperation` runs every minute. CPU is low, so watch what the AI decides
when there's nothing to do.
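To keep an eye on decisions from the terminal while it idles, you can poll the decision annotation the controller writes. A sketch: the annotation is empty until the first scale decision, and the loop is bounded so it exits on its own:

```shell
# Sample the last-scaled-decision annotation once a minute, five times.
for i in 1 2 3 4 5; do
  printf '%s  %s\n' "$(date -u +%H:%M:%S)" \
    "$(kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
         -o jsonpath='{.metadata.annotations.intelligent-scaling/last-scaled-decision}')"
  sleep 60
done
```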
### Watch operations fire

```bash
kubectl get operations -w
```

A new operation appears roughly every minute. Press Ctrl+C after a few have run.

In the UXP console, select **Operations** in the left navigation to see the
same list visually.

### Read a decision

Pick one of the operation names and describe it:

```bash
kubectl describe operation <operation-name>
```

Look at the `Events` section. At low CPU, the AI decides to hold. The cooldown
logic is also in the prompt, so it doesn't flip the instance class every minute
even if thresholds are crossed.

### See the current metrics

```bash
kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
  -o jsonpath='{.status.performanceMetrics}' | jq .
```

This is the same data the AI reads before making a decision.

### See the current instance class

```bash
kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
  -o jsonpath='{.spec.parameters.instanceClass}'
```

It's `db.t3.micro`.

You can also confirm the current instance type in the [AWS Console, RDS in
us-east-1][aws-rds].

## Trigger a scale

Run a load test that drives CPU above the scaling threshold so the AI decides
to act.

### Confirm the starting instance class

```bash
kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
  -o jsonpath='{.spec.parameters.instanceClass}'
```

It should be `db.t3.micro`.

### Run the load test

In a second terminal, run the load test from inside the `demo` directory:

```bash
bash perf-scale-demo.sh
```

The script sends CPU-intensive queries to the database for 5 to 10 minutes.
If it finishes without triggering a scale, run it again.

### Watch CPU climb

In your first terminal, watch the metrics update every 10 seconds:

```bash
watch -n 10 "kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \
  -o jsonpath='{.status.performanceMetrics.metrics}' | jq ."
+``` + +### Watch the controller fire + +Press Ctrl+C to exit the watch command, then: + +```bash +kubectl get operations -w +``` + +When CPU crosses the threshold (~60%), the next `CronOperation` decides to +scale up. Press Ctrl+C once you see a new operation start. + +### See the scale event + +Check the instance class: + +```bash +kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \ + -o jsonpath='{.spec.parameters.instanceClass}' +``` + +It should now be `db.t3.small`. Check the reasoning: + +```bash +kubectl get sqlinstance rds-metrics-database-ai-mysql -n database-team \ + -o jsonpath='{.metadata.annotations.intelligent-scaling/last-scaled-decision}' +``` + +In the [AWS Console, RDS in us-east-1][aws-rds], refresh the database list. +The instance class change is in progress, and RDS is modifying the live +database. + +## Clean up + +Delete the composite resources. Crossplane deletes all composed AWS resources +(VPC, subnets, RDS instance) before removing the composite resources. + +```bash +kubectl delete sqlinstance rds-metrics-database-ai-mysql -n database-team +kubectl delete network rds-metrics-database-ai-scale -n database-team +``` + +RDS deletion takes 5 to 10 minutes. 
Wait until the `sqlinstance` is fully removed: + +```bash +kubectl get sqlinstance -n database-team -w +``` + +Once it's gone, delete the `CronOperation` and its history: + +```bash +kubectl delete cronoperation rds-intelligent-scaling-cron +kubectl delete operations --all +``` + +Delete the cluster: + +```bash +CLUSTER_NAME=$(kind get clusters | grep "^up-" | head -1) +kind delete cluster --name "${CLUSTER_NAME}" +``` + +## Next steps + +In this tutorial, you: + +- Provisioned a real AWS RDS instance managed as a Crossplane `SQLInstance` +- Observed live CloudWatch metrics surfaced directly on the Kubernetes object +- Deployed an AI scaling controller with a single `kubectl apply` +- Read the model's reasoning from the annotation it wrote back to the object +- Ran a load test and watched the AI scale the database automatically + +Continue with: + +- [CronOperations reference][cronops-ref]: schedules, history limits, concurrency +- [WatchOperations reference][watchops-ref]: event-driven operations +- [Composition functions][fn-docs]: build custom logic for any resource +- [Provider authentication][auth-docs]: connect providers to your own cloud account +- [Upbound Marketplace][marketplace]: providers and functions for AWS, Azure, GCP, and more + +[kubectl-install]: https://kubernetes.io/docs/tasks/tools/ +[aws-cli]: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html +[kind]: https://kind.sigs.k8s.io/docs/user/quick-start/#installation +[anthropic-console]: https://console.anthropic.com/ +[aws-rds]: https://us-east-1.console.aws.amazon.com/rds/home?region=us-east-1#databases: +[cronops-ref]: /manuals/crossplane/operations/cron/ +[watchops-ref]: /manuals/crossplane/operations/watch/ +[fn-docs]: /manuals/cli/howtos/compositions/ +[auth-docs]: /manuals/packages/providers/authentication/ +[marketplace]: https://marketplace.upbound.io/ diff --git a/docs/getstarted/platform-tutorial.md b/docs/getstarted/platform-tutorial.md new file mode 100644 
index 00000000..ba5678e0 --- /dev/null +++ b/docs/getstarted/platform-tutorial.md @@ -0,0 +1,1104 @@ +--- +title: Build a platform with Upbound +description: Deploy a real app with a cloud database, observe drift detection, enforce policies, and change infrastructure live, all from a single control plane. +weight: {weight} +validation: + type: walkthrough + owner: docs@upbound.io + environment: local-upbound + timeout: 30m + variables: + AWS_ACCESS_KEY_ID: "" + AWS_SECRET_ACCESS_KEY: "" +--- + +In this tutorial, you deploy an application with a PostgreSQL database on AWS. +You use Upbound Crossplane to manage resources, enforce security policy, and +change infrastructure. + +By the end of this tutorial, you can: + +- Deploy a composite resource that creates multiple AWS resources from a single manifest +- Explore the providers and ProviderConfigs that connect your platform to AWS +- Trigger drift detection and watch Crossplane correct an out-of-band change +- Block non-compliant requests with Kyverno before they reach Crossplane +- Update live infrastructure by changing desired state + +## Prerequisites + +Install the following tools before starting: + +- [`kubectl`][kubectl-install] +- [AWS CLI][aws-cli], configured with credentials for an account where you can create VPCs, IAM roles, and RDS instances +- [kind][kind] + +### Install the up CLI + +```shell +curl -sL "https://cli.upbound.io" | sh +``` + +Move the binary into your `PATH`: + +```shell +sudo mv up /usr/local/bin/ +``` + +If you don't have `sudo` access: + +```shell +mkdir -p ~/.local/bin && mv up ~/.local/bin/ +export PATH="$HOME/.local/bin:$PATH" +``` + +Add the `export` line to your shell profile (`~/.bashrc`, `~/.zshrc`) to make it permanent. + +## Set up the project + +### Initialize the project + +Scaffold a new project with `up project init`. 
This creates the `app-w-db/` +directory with a valid `upbound.yaml` and the standard project layout +(`apis/`, `functions/`, `examples/`, `tests/`): + +```bash +up project init --scratch app-w-db +cd app-w-db +``` + +All commands from this point run from inside the `app-w-db` directory. + +### Add provider and function dependencies + +The platform composes AWS resources and uses `function-auto-ready` so composite +resources report ready status. Add them as project dependencies: + +```bash +up dependency add 'xpkg.upbound.io/upbound/provider-family-aws:v2.4.0' +up dependency add 'xpkg.upbound.io/upbound/provider-aws-iam:v2.4.0' +up dependency add 'xpkg.upbound.io/upbound/provider-aws-rds:v2.4.0' +up dependency add 'xpkg.upbound.io/upbound/provider-aws-ec2:v2.4.0' +up dependency add 'xpkg.upbound.io/crossplane-contrib/function-auto-ready:v0.6.1' +``` + +`up dependency add` records each dependency in `upbound.yaml`. + +### Define the platform APIs + +The platform exposes two APIs: `AppWDB` (a basic app with a database) and +`AppWDBSecure` (the same API with an optional security context, used later for +policy enforcement). + +Create the `AppWDB` XRD: + +```bash +mkdir -p apis/appwdb +cat > apis/appwdb/definition.yaml <<'EOF' +apiVersion: apiextensions.crossplane.io/v2 +kind: CompositeResourceDefinition +metadata: + name: appwdbs.demo.upbound.io +spec: + group: demo.upbound.io + names: + categories: + - crossplane + kind: AppWDB + plural: appwdbs + scope: Namespaced + versions: + - name: v1alpha1 + referenceable: true + schema: + openAPIV3Schema: + description: AppWDB is the Schema for the AppWDB API. + properties: + spec: + description: AppWDBSpec defines the desired state of AppWDB. 
+ type: object + properties: + parameters: + type: object + description: AppWDB configuration parameters + properties: + replicas: + type: integer + default: 2 + description: Number of app replicas + dbSize: + type: string + default: db.t3.micro + enum: + - db.t3.micro + - db.t3.small + - db.t3.medium + description: RDS instance class + region: + type: string + default: us-east-1 + description: AWS region + required: + - parameters + status: + description: AppWDBStatus defines the observed state of AppWDB. + type: object + required: + - spec + type: object + served: true +EOF +``` + +Create the `AppWDBSecure` XRD: + +```bash +mkdir -p apis/appwdbsecure +cat > apis/appwdbsecure/definition.yaml <<'EOF' +apiVersion: apiextensions.crossplane.io/v2 +kind: CompositeResourceDefinition +metadata: + name: appwdbsecures.demo.upbound.io +spec: + group: demo.upbound.io + names: + categories: + - crossplane + kind: AppWDBSecure + plural: appwdbsecures + scope: Namespaced + versions: + - name: v1alpha1 + referenceable: true + schema: + openAPIV3Schema: + description: AppWDBSecure is the Schema for the AppWDBSecure API. + properties: + spec: + description: AppWDBSecureSpec defines the desired state of AppWDBSecure. + type: object + properties: + parameters: + type: object + description: AppWDBSecure configuration parameters + properties: + replicas: + type: integer + default: 2 + description: Number of app replicas + dbSize: + type: string + default: db.t3.micro + enum: + - db.t3.micro + - db.t3.small + - db.t3.medium + description: RDS instance class + region: + type: string + default: us-east-1 + description: AWS region + securityContext: + type: object + description: Optional security context for the application container + properties: + privileged: + type: boolean + description: Run container as privileged. Blocked by platform policy. + required: + - parameters + status: + description: AppWDBSecureStatus defines the observed state of AppWDBSecure. 
+ type: object + required: + - spec + type: object + served: true +EOF +``` + +### Create the composition function + +The composition function is a KCL program that maps the user's 10-line request +to the full set of AWS resources. + +```bash +mkdir -p functions/compose-resources +cat > functions/compose-resources/kcl.mod <<'EOF' +[package] +name = "compose-resources" +version = "0.1.0" +EOF +``` + +Create `main.k`. This file is the entire composition logic. It reads the +composite resource and outputs every managed resource Crossplane creates: + +```bash +cat > functions/compose-resources/main.k <<'EOF' +oxr = option("params").oxr +ocds = option("params").ocds + +params = oxr.spec.parameters +appName = oxr.metadata.name +region = params.region or "us-east-1" +dbSize = params.dbSize or "db.t3.micro" +replicas = params.replicas or 2 + +_is_deleting = bool(oxr.metadata?.deletionTimestamp) +_db_key = "${appName}-db" +_instance_still_exists = _db_key in ocds + +_metadata = lambda name: str -> any { + { + namespace: oxr.metadata.namespace + annotations: {"krm.kcl.dev/composition-resource-name": name} + } +} + +_defaults = { + managementPolicies: ["*"] + providerConfigRef: {kind: "ProviderConfig", name: "default"} +} + +_subnets = [ + {cidrBlock: "10.0.1.0/24", availabilityZone: "${region}a", suffix: "a"} + {cidrBlock: "10.0.2.0/24", availabilityZone: "${region}b", suffix: "b"} + {cidrBlock: "10.0.3.0/24", availabilityZone: "${region}c", suffix: "c"} +] + +_sg_items = [{ + apiVersion: "rds.aws.m.upbound.io/v1beta1" + kind: "SubnetGroup" + metadata: _metadata("${appName}-subnet-group") | {name: "${appName}-subnet-group"} + spec: _defaults | { + forProvider: { + region: region + description: "${appName} DB subnet group" + subnetIdSelector: {matchControllerRef: True} + } + } +}] if not _is_deleting or _instance_still_exists else [] + +_db_items = [{ + apiVersion: "rds.aws.m.upbound.io/v1beta1" + kind: "Instance" + metadata: _metadata("${appName}-db") | { + name: 
"${appName}-db" + annotations: {"crossplane.io/external-name": "${appName}-db"} + } + spec: _defaults | { + forProvider: { + region: region + identifier: "${appName}-db" + engine: "postgres" + engineVersion: "16.6" + instanceClass: dbSize + username: "demoadmin" + dbName: "appdb" + autoGeneratePassword: True + passwordSecretRef: {namespace: oxr.metadata.namespace, name: "${appName}-db-password", key: "password"} + applyImmediately: True + skipFinalSnapshot: True + allocatedStorage: 20 + storageType: "gp3" + storageEncrypted: False + publiclyAccessible: False + backupRetentionPeriod: 0 + dbSubnetGroupNameSelector: {matchControllerRef: True} + } + initProvider: {identifier: "${appName}-db"} + } +}] if not _is_deleting else [] + +_items = [ + { + apiVersion: "ec2.aws.m.upbound.io/v1beta1" + kind: "VPC" + metadata: _metadata("${appName}-vpc") | {name: "${appName}-vpc"} + spec: _defaults | { + forProvider: { + region: region + cidrBlock: "10.0.0.0/16" + enableDnsHostnames: True + enableDnsSupport: True + tags: {"Name": "${appName}-vpc"} + } + } + } +] + [ + { + apiVersion: "ec2.aws.m.upbound.io/v1beta1" + kind: "Subnet" + metadata: _metadata("${appName}-subnet-${s.suffix}") | {name: "${appName}-subnet-${s.suffix}"} + spec: _defaults | { + forProvider: { + region: region + cidrBlock: s.cidrBlock + availabilityZone: s.availabilityZone + vpcIdSelector: {matchControllerRef: True} + tags: {"Name": "${appName}-subnet-${s.suffix}"} + } + } + } for s in _subnets +] + _sg_items + _db_items + [ + { + apiVersion: "iam.aws.m.upbound.io/v1beta1" + kind: "Role" + metadata: _metadata("${appName}-role") | {name: "${appName}-role"} + spec: _defaults | { + forProvider: { + assumeRolePolicy: '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}' + } + } + } + { + apiVersion: "apps/v1" + kind: "Deployment" + metadata: _metadata("${appName}-deployment") | {name: appName} + spec: { + replicas: replicas + selector: 
{matchLabels: {app: appName}} + template: { + metadata: {labels: {app: appName}} + spec: { + containers: [ + { + name: "app" + image: "public.ecr.aws/nginx/nginx:stable-alpine" + ports: [{containerPort: 80}] + } | ({securityContext: {privileged: params.securityContext.privileged}} if params?.securityContext?.privileged != None else {}) + ] + } + } + } + } +] + +items = _items +EOF +``` + +### Create example manifests + +Create the base example and the variants used in later steps: + +```bash +mkdir -p examples/appwdb +cat > examples/appwdb/example.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDB +metadata: + name: demo-01 + namespace: demo +spec: + parameters: + replicas: 2 + dbSize: db.t3.micro + region: us-east-1 +EOF + +cat > examples/appwdb/variant-bigger-db.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDB +metadata: + name: demo-01 + namespace: demo +spec: + parameters: + replicas: 2 + dbSize: db.t3.medium + region: us-east-1 +EOF + +cat > examples/appwdb/variant-more-replicas.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDB +metadata: + name: demo-01 + namespace: demo +spec: + parameters: + replicas: 5 + dbSize: db.t3.micro + region: us-east-1 +EOF +``` + +Create the secure examples used in the policy enforcement step: + +```bash +mkdir -p examples/appwdbsecure +cat > examples/appwdbsecure/example-1.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDBSecure +metadata: + name: kyverno-demo-01 + namespace: demo +spec: + parameters: + replicas: 2 + dbSize: db.t3.micro + region: us-east-1 + securityContext: + privileged: true +EOF + +cat > examples/appwdbsecure/example-2.yaml <<'EOF' +apiVersion: demo.upbound.io/v1alpha1 +kind: AppWDBSecure +metadata: + name: kyverno-demo-01 + namespace: demo +spec: + parameters: + replicas: 2 + dbSize: db.t3.micro + region: us-east-1 + securityContext: + privileged: false +EOF +``` + +### Create the ProviderConfig + +The `ProviderConfig` tells the AWS providers where to 
find credentials. + +```bash +mkdir -p setup/config +cat > setup/config/aws-provider-config.yaml <<'EOF' +apiVersion: aws.m.upbound.io/v1beta1 +kind: ProviderConfig +metadata: + name: default + namespace: demo +spec: + credentials: + source: Secret + secretRef: + namespace: demo + name: aws-secret + key: creds +EOF +``` + +## Configure AWS credentials + +The demo creates real AWS resources. Export credentials with permissions to +create VPCs, subnets, IAM roles, and RDS instances: + +```bash +export AWS_ACCESS_KEY_ID= +export AWS_SECRET_ACCESS_KEY= +``` + +## Start the project + +Open a dedicated terminal window and run from inside `app-w-db`: + +```bash +up project run --local --ingress +``` + +This command: + +- Creates a kind cluster named `up-app-w-db` +- Installs UXP into the cluster +- Builds and deploys the KCL composition function +- Installs the AWS providers declared in `upbound.yaml` +- Applies the XRDs from `apis/` +- Installs an ingress controller for the UXP console + +Startup takes several minutes. Keep this terminal open throughout the tutorial. + +:::warning +`up project run --local` may print `traces export: context deadline exceeded`. +This message reports a telemetry timeout and doesn't affect the cluster setup. +::: + +### Configure kubectl + +In your second terminal, point kubectl at the new cluster: + +```bash +kind get kubeconfig --name up-app-w-db > ~/.kube/config +``` + +:::warning +This overwrites your existing `~/.kube/config`. To preserve existing contexts, +use `kind get kubeconfig --name up-app-w-db > ~/.kube/config-upbound` and merge: +`KUBECONFIG=~/.kube/config:~/.kube/config-upbound kubectl config view --flatten > ~/.kube/config.merged && mv ~/.kube/config.merged ~/.kube/config` +::: + +Verify the connection: + +```bash +kubectl get nodes +``` + +### Apply AWS credentials + +1. Create the `demo` namespace: + + ```bash + kubectl create namespace demo + ``` + +2. 
Create a secret with your AWS credentials: + + ```bash + kubectl create secret generic aws-secret \ + -n demo \ + --from-literal=creds="$(printf '[default]\naws_access_key_id = %s\naws_secret_access_key = %s\n' \ + "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY")" + ``` + +### Verify the setup + +Check that providers are installed and healthy: + +```bash +kubectl get providers +``` + +Wait until all four providers show `HEALTHY: True` before continuing. + +:::warning +If this returns **No resources found**, `up project run --local` didn't +complete. Delete the cluster with `kind delete cluster --name up-app-w-db` and +restart. +::: + +Check that the composition function is healthy: + +```bash +kubectl get functions +``` + +The KCL function should show `HEALTHY: True`. + +:::warning +If this returns **No resources found**, the KCL function wasn't built or +deployed. Check the `up project run` terminal and restart. +::: + +### Apply the compositions + +Get the exact function name assigned by `up project run`: + +```bash +FUNC_NAME=$(kubectl get functions --no-headers | grep -v 'crossplane-contrib' | awk '{print $1}') +echo $FUNC_NAME +``` + +Apply both Compositions using that name: + +```bash +cat > apis/appwdb/composition.yaml < apis/appwdbsecure/composition.yaml < w-kyverno/addon-kyverno.yaml <<'EOF' + apiVersion: pkg.upbound.io/v1beta1 + kind: AddOn + metadata: + name: upbound-addon-kyverno + spec: + package: xpkg.upbound.io/upbound/addon-kyverno:3.7.0 + EOF + ``` + +2. Apply it: + + ```bash + kubectl apply -f w-kyverno/addon-kyverno.yaml + ``` + +3. In the UXP console, select **AddOns** in the left navigation. The + `upbound-addon-kyverno` entry appears and becomes healthy in about two + minutes. Or watch from the terminal: + + ```bash + kubectl get addons.pkg.upbound.io upbound-addon-kyverno -w + ``` + + Wait until `HEALTHY: True` before continuing. Press Ctrl+C when it does. 
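
   Instead of watching interactively, you can block until the add-on reports
   healthy. This is a sketch: it assumes the `AddOn` resource exposes a
   `Healthy` status condition, as Crossplane packages do (the `HEALTHY`
   column above is typically backed by one):

   ```bash
   # Block for up to five minutes waiting for the Kyverno add-on to
   # report the Healthy condition (assumed; exits non-zero on timeout)
   kubectl wait addons.pkg.upbound.io/upbound-addon-kyverno \
     --for=condition=Healthy --timeout=300s
   ```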
+ + If it stays `HEALTHY: False` after 5 minutes, check + `kubectl describe addons.pkg.upbound.io upbound-addon-kyverno` for events. + +4. Create the no-privileged-containers policy: + + ```bash + cat > w-kyverno/policy-no-privileged.yaml <<'EOF' + apiVersion: kyverno.io/v1 + kind: ClusterPolicy + metadata: + name: disallow-privileged-containers + annotations: + policies.kyverno.io/title: Disallow Privileged Containers + policies.kyverno.io/category: Pod Security + policies.kyverno.io/severity: high + policies.kyverno.io/description: >- + Privileged containers have unrestricted access to the host system. + This policy blocks any AppWDBSecure request with securityContext.privileged: true + before Crossplane composes any resources, so nothing reaches AWS. + spec: + validationFailureAction: Enforce + background: false + rules: + - name: no-privileged-platform-api + match: + any: + - resources: + kinds: + - AppWDBSecure + validate: + message: "Privileged containers are not allowed on this platform. Remove securityContext.privileged: true from your request." + pattern: + spec: + parameters: + =(securityContext): + =(privileged): "false" + - name: no-privileged-deployment + match: + any: + - resources: + kinds: + - Deployment + validate: + message: "Privileged containers are not allowed on this platform. Remove securityContext.privileged: true from your request." + pattern: + spec: + template: + spec: + containers: + - =(securityContext): + =(privileged): "false" + EOF + ``` + +5. Apply the policy: + + ```bash + kubectl apply -f w-kyverno/policy-no-privileged.yaml + ``` + + You may see this warning: + + ``` + Warning: the kind defined in the all match resource is invalid: unable to convert GVK to GVR for kinds AppWDBSecure + ``` + + This is expected if the XRDs were recently established and doesn't prevent + the policy from enforcing once the CRD is ready. + +6. 
Verify the policy is active: + + ```bash + kubectl get clusterpolicy disallow-privileged-containers + ``` + + `READY: True` means the policy is enforcing. + +### Block a privileged request + +:::warning +Kyverno can only evaluate requests for resource types whose CRDs are installed. +If you see `no matches for kind "AppWDBSecure"`, the XRD is not installed. +Confirm `kubectl get xrds` shows both XRDs as `ESTABLISHED: True`. +::: + +1. Try to apply a request with `privileged: true`: + + ```bash + kubectl apply -f examples/appwdbsecure/example-1.yaml + ``` + + The request is blocked immediately. The error references + `disallow-privileged-containers`. Nothing was created. Kyverno stopped the + request before Crossplane saw it. + + `demo-01`, deployed before Kyverno was installed, has a running RDS + instance. This request didn't start one. + +### Apply a compliant request + +1. Apply the compliant version (`privileged: false`): + + ```bash + kubectl apply -f examples/appwdbsecure/example-2.yaml + ``` + + The request passes the policy check and starts provisioning (~10 minutes). + +2. Watch the status: + + ```bash + kubectl get appwdbsecure -n demo -w + ``` + +## Change it live + +To change infrastructure, update the desired state. Crossplane figures out +what needs to change and does it. + +### Option A: Scale the database + +1. Apply the change: + + ```bash + kubectl apply -f examples/appwdb/variant-bigger-db.yaml + ``` + +2. `DESIRED` updates immediately; `ACTUAL` updates once AWS finishes (~5 minutes): + + ```bash + kubectl get instances.rds.aws.m.upbound.io demo-01-db -n demo -w \ + -o custom-columns='NAME:.metadata.name,DESIRED:.spec.forProvider.instanceClass,ACTUAL:.status.atProvider.instanceClass,SYNCED:.status.conditions[?(@.type=="Synced")].reason' + ``` + +3. In the AWS Console, check the **Status** and **Size** columns for `demo-01-db`. + +4. Confirm the change: + + ```bash + kubectl get appwdb demo-01 -n demo + ``` + +### Option B: Scale replicas + +1. 
Apply the change: + + ```bash + kubectl apply -f examples/appwdb/variant-more-replicas.yaml + ``` + +2. Watch the `Deployment` scale (~30 seconds): + + ```bash + kubectl get deployment demo-01 -n demo -w \ + -o custom-columns='NAME:.metadata.name,DESIRED:.spec.replicas,READY:.status.readyReplicas' + ``` + +3. Confirm the change: + + ```bash + kubectl get appwdb demo-01 -n demo + ``` + +In the UXP console, navigate to `demo-01` to see the full resource tree with +your updated values. + +## Clean up + +Delete the composite resources. Crossplane deletes all composed AWS resources +before removing each composite resource. + +```shell +kubectl delete appwdbsecure kyverno-demo-01 -n demo +kubectl delete appwdb demo-01 -n demo +``` + +RDS deletion takes 5 to 10 minutes. Wait until both are fully removed: + +```shell +kubectl get appwdb -n demo -w +kubectl get appwdbsecure -n demo -w +``` + +Delete the cluster: + +```shell +kind delete cluster --name up-app-w-db +``` + +## Next steps + +In this tutorial, you: + +- Created a Crossplane project with XRDs, Compositions, and a KCL function +- Deployed a composite resource that created a VPC, subnets, IAM role, RDS + instance, and Kubernetes `Deployment` from a 10-line manifest +- Explored the providers and ProviderConfigs that connected your platform to AWS +- Watched Crossplane detect and correct an out-of-band change to a VPC tag +- Blocked a privileged container request with Kyverno before it reached the cluster +- Updated live infrastructure by changing desired state + +Continue with: + +- [Composite Resource Definitions][xrd-concept]: design your own platform APIs +- [Composition functions][fn-docs]: write the logic that maps user requests to resources +- [Provider authentication][auth-docs]: connect providers to your own cloud account +- [Upbound Marketplace][marketplace]: providers and add-ons for AWS, Azure, GCP, and more + +[kubectl-install]: https://kubernetes.io/docs/tasks/tools/ +[up-cli-releases]: 
https://github.com/upbound/up/releases +[aws-cli]: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html +[kind]: https://kind.sigs.k8s.io/docs/user/quick-start/#installation +[fn-go]: /manuals/cli/howtos/compositions/go/ +[fn-python]: /manuals/cli/howtos/compositions/python/ +[fn-go-template]: /manuals/cli/howtos/compositions/go-template/ +[xrd-concept]: /manuals/packages/xrds/ +[fn-docs]: /manuals/cli/howtos/compositions/ +[auth-docs]: /manuals/packages/providers/authentication/ +[marketplace]: https://marketplace.upbound.io/ diff --git a/utils/vale/styles/Upbound/spelling-exceptions.txt b/utils/vale/styles/Upbound/spelling-exceptions.txt index f82746d1..06285da1 100644 --- a/utils/vale/styles/Upbound/spelling-exceptions.txt +++ b/utils/vale/styles/Upbound/spelling-exceptions.txt @@ -88,6 +88,10 @@ namespaces namespaced Netlify OAuth +Ollama +ollama's +Ollama's +ollama Okta OTEL overengineer @@ -142,6 +146,9 @@ UXP uxp vCluster vcluster +VPC +VPCs +VPC's virtualized Velero VMs