This is the Terraform module that helps bootstrap a foundation stack in AWS.
This project uses release-please to manage the release flow for contributions.
This release puts the three core EKS addons under Terraform management via
the EKS managed-addons API, with per-addon enable toggles. vpc-cni is now
opt-in (`stack_enable_vpc_cni_addon` defaults to `false`); kube-proxy and
coredns default to `true`. `stack_use_vpc_cni_max_pods` is removed.
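For reference, a minimal module call with the new toggles spelled out at their defaults (an illustrative sketch; only the variable names and defaults come from this module):

```hcl
module "foundation" {
  # ...

  # Per-addon toggles introduced in this release, shown at their defaults.
  stack_enable_vpc_cni_addon    = false # opt-in: the cluster comes up CNI-less
  stack_enable_kube_proxy_addon = true
  stack_enable_coredns_addon    = true
}
```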
| Addon | Default | Plan effect on an existing v6.x cluster |
|---|---|---|
| vpc-cni | `false` | Nothing. The existing self-managed `aws-node` DaemonSet is left untouched and remains unmanaged. |
| kube-proxy | `true` | `+` create managed addon. `OVERWRITE` adopts the existing self-managed DaemonSet. No disruption. |
| coredns | `true` | `+` create managed addon. `OVERWRITE` adopts the existing self-managed Deployment. No disruption. |
If you want to keep vpc-cni under Terraform, set
`stack_enable_vpc_cni_addon = true` explicitly — the same `OVERWRITE`
adoption applies (no pod restarts).

`stack_enable_vpc_cni_addon` now drives both the addon install and the
nodeadm `maxPods=110` cloudinit:
| Old setting | New equivalent | Behavior |
|---|---|---|
| `stack_use_vpc_cni_max_pods = false` (default) | `stack_enable_vpc_cni_addon = false` (default) | No managed vpc-cni; nodes get `maxPods=110` cloudinit so an alternative CNI fits. |
| `stack_use_vpc_cni_max_pods = true` | `stack_enable_vpc_cni_addon = true` | vpc-cni installed/adopted as a managed addon; no maxPods cloudinit (ENI math drives pod density). |
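Migrating therefore amounts to swapping the variable name in your module call. A minimal before/after sketch (only the variable names come from this module):

```hcl
module "foundation" {
  # ...

  # Before (v6.x, variable now removed):
  # stack_use_vpc_cni_max_pods = true

  # After (this release):
  stack_enable_vpc_cni_addon = true
}
```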
Heads-up for users running self-managed vpc-cni today: with the new default
(`false`), the next node refresh will apply the `maxPods=110` cloudinit even though `aws-node` is still running on your nodes. To preserve the prior pod-density behavior, set `stack_enable_vpc_cni_addon = true` so the module manages vpc-cni and skips the cloudinit cap.
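For context, the cap is delivered as a nodeadm NodeConfig fragment in the node cloudinit. The sketch below is illustrative only; the module generates the exact content for you, so nothing needs to be added to your configuration:

```hcl
# Illustrative only: roughly the kind of nodeadm NodeConfig the maxPods cap
# corresponds to. The module injects this via cloudinit automatically.
locals {
  maxpods_nodeconfig = <<-EOT
    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      kubelet:
        config:
          maxPods: 110
  EOT
}
```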
Because Terraform never managed your existing `aws-node` DaemonSet, simply
leaving `stack_enable_vpc_cni_addon` at its default `false` will not
remove it. Two paths:

- Two-step (recommended): set `stack_enable_vpc_cni_addon = true`, apply (AWS adopts the DaemonSet via `OVERWRITE`); then set it back to `false`, apply (the managed addon is destroyed and `preserve = false` removes the DaemonSet too). A sketch of the two applies follows this list.
- Manual: leave the variable at `false` and run `kubectl delete daemonset -n kube-system aws-node` once your replacement CNI is healthy.
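A minimal sketch of the two-step path (the variable name comes from this module; the sequencing comments are illustrative):

```hcl
module "foundation" {
  # ...

  # Apply 1: adopt the existing aws-node DaemonSet as a managed addon (OVERWRITE).
  stack_enable_vpc_cni_addon = true

  # Apply 2: flip the toggle to false and apply again. The managed addon is
  # destroyed and, because the module sets preserve = false, the aws-node
  # DaemonSet is removed with it.
  # stack_enable_vpc_cni_addon = false
}
```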
The default behavior already supports this: keep
`stack_enable_vpc_cni_addon = false`, install your CNI out-of-band (Helm,
ArgoCD) using the existing outputs (`eks_cluster_endpoint`,
`eks_cluster_certificate_authority_data`, `eks_oidc_provider_arn`,
`cluster_security_group_id`, `node_security_group_id`, `vpc`). The
`maxPods=110` nodeadm cloudinit is applied automatically.
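For out-of-band Helm installs, those outputs can be wired straight into a provider. A minimal sketch, assuming the module is instantiated as `module "foundation"` and the Helm provider v2 block syntax (adjust for your provider version and auth setup):

```hcl
provider "helm" {
  kubernetes {
    host                   = module.foundation.eks_cluster_endpoint
    cluster_ca_certificate = base64decode(module.foundation.eks_cluster_certificate_authority_data)

    # Authenticate via the AWS CLI; assumes it is available where Terraform runs.
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.foundation.eks_cluster_name]
    }
  }
}
```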
Removal is destructive by design. Disabling any managed addon
(`stack_enable_vpc_cni_addon`, `stack_enable_kube_proxy_addon`, `stack_enable_coredns_addon`) after it has been adopted tells AWS to remove both the addon registration and its underlying workload (`aws-node`/`kube-proxy`/`coredns`) — the module sets `preserve = false` so a CNI swap leaves a clean slate. For phased migrations where you want the workload to keep running after deregistration, set `preserve = true` per-addon via `stack_cluster_addons_overrides` (see "Power-user overrides" below).
| CNI | `stack_enable_vpc_cni_addon` | `stack_enable_kube_proxy_addon` | Notes |
|---|---|---|---|
| vpc-cni | `true` | `true` (default) | AWS native. IRSA / prefix delegation via `*_overrides`. |
| Cilium | `false` (default) | `false` for kube-proxy replacement | Install via Helm post-bootstrap. See Cilium docs for EKS. |
| Kube-OVN | `false` (default) | `true` (default) | Install via Helm/ArgoCD post-bootstrap. |
| Other | `false` (default) | varies | Anything that wants a clean slate works the same way. |
module "foundation" {
# ...
stack_enable_vpc_cni_addon = false
stack_enable_kube_proxy_addon = false
stack_enable_coredns_addon = true
}Then install Cilium with kubeProxyReplacement=true per the
Cilium EKS install guide.
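A minimal sketch of that install through the Terraform Helm provider (using its v2 `set` block syntax; the chart coordinates and `kubeProxyReplacement`/`k8sServiceHost`/`k8sServicePort` values follow the Cilium EKS docs, but pin versions and adjust values for your environment):

```hcl
resource "helm_release" "cilium" {
  name       = "cilium"
  repository = "https://helm.cilium.io"
  chart      = "cilium"
  namespace  = "kube-system"

  set {
    name  = "kubeProxyReplacement"
    value = "true"
  }

  # With kube-proxy disabled, point Cilium directly at the API server.
  set {
    name  = "k8sServiceHost"
    value = trimprefix(module.foundation.eks_cluster_endpoint, "https://")
  }

  set {
    name  = "k8sServicePort"
    value = "443"
  }
}
```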
module "foundation" {
# ...
stack_enable_vpc_cni_addon = false
# kube-proxy and coredns stay enabled
}Install Kube-OVN per the upstream install docs.
Pin addon versions or pass addon-specific configuration (e.g. vpc-cni prefix
delegation) via `stack_cluster_addons_overrides`:

```hcl
stack_cluster_addons_overrides = {
  "vpc-cni" = {
    configuration_values = jsonencode({
      env = { ENABLE_PREFIX_DELEGATION = "true" }
    })
    # Keep the aws-node DaemonSet running after disabling the managed addon
    # (e.g. for a phased CNI migration). Default is preserve = false.
    preserve = true
  }
  "coredns" = {
    addon_version = "v1.11.4-eksbuild.2"
    most_recent   = false
  }
}
```

| Name | Version |
|---|---|
| terraform | >= 1.5.7 |
| aws | >= 6.14.1 |

| Name | Version |
|---|---|
| aws | 6.42.0 |

| Name | Source | Version |
|---|---|---|
| cert_manager_irsa_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts | 6.6.0 |
| ebs_csi_driver_irsa_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts | 6.6.0 |
| eks | terraform-aws-modules/eks/aws | 21.19.0 |
| external_dns_irsa_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts | 6.6.0 |
| fck_nat | RaJiska/fck-nat/aws | 1.4.0 |
| karpenter | terraform-aws-modules/eks/aws//modules/karpenter | 21.19.0 |
| load_balancer_controller_irsa_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts | 6.6.0 |
| s3_csi | terraform-aws-modules/s3-bucket/aws | 5.12.0 |
| s3_driver_irsa_role | terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts | 6.6.0 |
| vpc | terraform-aws-modules/vpc/aws | 6.6.1 |

| Name | Type |
|---|---|
| aws_eip.main | resource |
| aws_vpc_endpoint.eks_vpc_endpoints | resource |
| aws_ami.main | data source |
| aws_caller_identity.current | data source |
| aws_iam_policy_document.source | data source |
| aws_partition.current | data source |
| aws_region.current | data source |

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| initial_instance_types | instance types of the initial managed node group | `list(string)` | n/a | yes |
| cluster_enabled_log_types | List of EKS control plane log types to enable. Valid values: api, audit, authenticator, controllerManager, scheduler. | `list(string)` | `[]` | no |
| cluster_endpoint_public_access | Whether the EKS cluster API server endpoint is publicly accessible. Set to false for private-only access (requires VPC connectivity). | `bool` | `true` | no |
| create_node_security_group | Whether to create a dedicated security group for EKS managed node groups. When true, the node_security_group_id output is populated. | `bool` | `false` | no |
| eks_cluster_version | Kubernetes version to set for the cluster | `string` | `"1.35"` | no |
| extra_access_entries | EKS access entries needed by IAM roles interacting with this cluster | `list(object({` | `[]` | no |
| initial_node_desired_size | desired size of the initial managed node group | `number` | `3` | no |
| initial_node_labels | labels for the initial managed node group | `map(string)` | `{` | no |
| initial_node_max_size | max size of the initial managed node group | `number` | `6` | no |
| initial_node_min_size | minimum size of the initial managed node group | `number` | `2` | no |
| initial_node_taints | taints for the initial managed node group | `map(object({ key = string, value = string, effect = string }))` | `{` | no |
| permissions_boundary | IAM permissions boundary policy name applied to all IAM roles. When set, constructs the full ARN from the current account and partition. | `string` | `""` | no |
| s3_csi_driver_bucket_arns | existing buckets the s3 CSI driver should have access to | `list(string)` | `[]` | no |
| s3_csi_driver_create_bucket | create a new bucket for use with the s3 CSI driver | `bool` | `true` | no |
| stack_admin_arns | ARNs of the roles for the cluster admins role | `list(string)` | `[]` | no |
| stack_cluster_addons_overrides | Per-addon overrides keyed by addon name (e.g. "vpc-cni", "kube-proxy", "coredns"). Merges over module defaults; use for version pinning, vpc-cni prefix delegation, custom networking, etc. Accepts any attributes supported by the terraform-aws-modules/eks/aws v21+ addons map. | `any` | `{}` | no |
| stack_create | should resources be created | `bool` | `true` | no |
| stack_create_pelotech_nat_eip | should create the pelotech NAT EIP even if NAT isn't enabled - nice for getting IPs created for allow lists | `bool` | `false` | no |
| stack_enable_cluster_kms | Should secrets be encrypted by KMS in the cluster | `bool` | `true` | no |
| stack_enable_coredns_addon | Install coredns as a managed addon. Note: coredns will not schedule until a CNI is running and nodes are Ready. | `bool` | `true` | no |
| stack_enable_default_eks_managed_node_group | Ability to disable the default node group | `bool` | `true` | no |
| stack_enable_kube_proxy_addon | Install kube-proxy as a managed addon. Set false when using Cilium with kube-proxy replacement enabled. | `bool` | `true` | no |
| stack_enable_vpc_cni_addon | Install AWS VPC CNI as a managed addon. Defaults to false so the cluster comes up CNI-less and consumers pick a CNI (Cilium, Kube-OVN, or vpc-cni). Set true to install vpc-cni as a managed addon. When false, nodeadm maxPods=110 cloudinit is applied automatically. | `bool` | `false` | no |
| stack_existing_vpc_config | Setting the VPC | `object({` | `null` | no |
| stack_name | Name of the stack | `string` | `"foundation-stack"` | no |
| stack_pelotech_nat_ami_name_filter | AMI name filter to find the correct AMI | `string` | `"fck-nat-al2023-hvm-*"` | no |
| stack_pelotech_nat_ami_owner_id | Owner ID to search for the AMI | `string` | `"568608671756"` | no |
| stack_pelotech_nat_enabled | Use pelotech-nat as NAT instances instead of a NAT gateway | `bool` | `false` | no |
| stack_pelotech_nat_instance_type | choose instance based on bandwidth requirements | `string` | `"t4g.micro"` | no |
| stack_ro_arns | ARNs of the roles for the cluster read-only role; these will also have KMS read-only access for CI plan purposes. More limited access should use the extra access entries | `list(string)` | `[]` | no |
| stack_tags | tags to be added to the stack, should at least have Owner and Environment | `map(string)` | `{` | no |
| stack_vpc_block | Variables for defining the VPC for the stack | `object({` | `{` | no |
| vpc_endpoints | VPC endpoints within the cluster VPC network, note: this only works when using the internally created VPC | `list(string)` | `[]` | no |

| Name | Description |
|---|---|
| cert_manager_role_arn | ARN of the Cert Manager IRSA role |
| cluster_security_group_id | Cluster security group that was created by Amazon EKS for the cluster |
| ebs_csi_driver_role_arn | ARN of the EBS CSI driver IRSA role |
| eks_cluster_certificate_authority_data | Base64 encoded certificate data for the cluster |
| eks_cluster_endpoint | The endpoint for the EKS cluster API server |
| eks_cluster_iam_role_name | The name of the EKS cluster IAM role |
| eks_cluster_name | The name of the EKS cluster |
| eks_cluster_tls_certificate_sha1_fingerprint | The SHA1 fingerprint of the public key of the cluster's certificate |
| eks_managed_node_groups | Map of attribute maps for all EKS managed node groups created |
| eks_managed_node_groups_autoscaling_group_names | List of the autoscaling group names created by EKS managed node groups |
| eks_oidc_provider | The OpenID Connect identity provider (issuer URL without leading https://) |
| eks_oidc_provider_arn | EKS OIDC provider ARN to be able to add IRSA roles to the cluster out of band |
| external_dns_role_arn | ARN of the External DNS IRSA role |
| karpenter_node_iam_role_name | The name of the Karpenter node IAM role |
| karpenter_queue_name | The name of the Karpenter SQS queue |
| karpenter_role_arn | ARN of the Karpenter IRSA role |
| kms_key_arn | The Amazon Resource Name (ARN) of the KMS key |
| load_balancer_controller_role_arn | ARN of the ALB controller IRSA role |
| node_security_group_id | ID of the node shared security group |
| s3_csi_driver_role_arn | ARN of the S3 CSI driver IRSA role |
| vpc | The vpc object when it's created |