Amazon EKS Anywhere

EKS Anywhere documentation homepage

EKS Anywhere is container management software built by AWS that makes it easier to run and manage Kubernetes clusters on-premises and at the edge. EKS Anywhere is built on EKS Distro, which is the same reliable and secure Kubernetes distribution used by Amazon Elastic Kubernetes Service (EKS) in AWS Cloud. EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations.

Unlike Amazon EKS in AWS Cloud, EKS Anywhere is a user-managed product that runs on user-managed infrastructure. You are responsible for cluster lifecycle operations and maintenance of your EKS Anywhere clusters.

The tenets of the EKS Anywhere project are:

  • Simple: Make using a Kubernetes distribution simple and boring (reliable and secure).
  • Opinionated Modularity: Provide opinionated defaults about the best components to include with Kubernetes, but give customers the ability to swap them out.
  • Open: Provide open source tooling backed, validated, and maintained by Amazon.
  • Ubiquitous: Enable customers and partners to integrate a Kubernetes distribution with the most common tooling.
  • Stand Alone: Provide for use anywhere without AWS dependencies.
  • Better with AWS: Enable AWS customers to easily adopt additional AWS services.

1 - Overview

What is EKS Anywhere?

EKS Anywhere is container management software built by AWS that makes it easier to run and manage Kubernetes clusters on-premises and at the edge. EKS Anywhere is built on EKS Distro, which is the same reliable and secure Kubernetes distribution used by Amazon Elastic Kubernetes Service (EKS) in AWS Cloud. EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations.

Unlike Amazon EKS in AWS Cloud, EKS Anywhere is a user-managed product that runs on user-managed infrastructure. You are responsible for cluster lifecycle operations and maintenance of your EKS Anywhere clusters. EKS Anywhere is open source and free to use. To receive support for your EKS Anywhere clusters, you can optionally purchase EKS Anywhere Enterprise Subscriptions for 24/7 support from AWS subject matter experts and access to EKS Anywhere Curated Packages. EKS Anywhere Curated Packages are software packages that are built, tested, and supported by AWS and extend the core functionality of Kubernetes on your EKS Anywhere clusters.

EKS Anywhere supports many different types of infrastructure including VMware vSphere, Bare Metal, Nutanix, Apache CloudStack, and AWS Snow. You can run EKS Anywhere without a connection to AWS Cloud and in air-gapped environments, or you can optionally connect to AWS Cloud to integrate with other AWS services. You can use the EKS Connector to view your EKS Anywhere clusters in the Amazon EKS console, AWS IAM to authenticate to your EKS Anywhere clusters, IAM Roles for Service Accounts (IRSA) to authenticate Pods with other AWS services, and AWS Distro for OpenTelemetry to send metrics to Amazon Managed Prometheus for monitoring cluster resources.

EKS Anywhere is built on the Kubernetes sub-project called Cluster API (CAPI), which is focused on providing declarative APIs and tooling to simplify the provisioning, upgrading, and operating of multiple Kubernetes clusters. While EKS Anywhere simplifies and abstracts the CAPI primitives, it is useful to understand the basics of CAPI when using EKS Anywhere.

Why EKS Anywhere?

  • Simplify and automate Kubernetes management on-premises
  • Unify Kubernetes distribution and support across on-premises, edge, and cloud environments
  • Adopt modern operational practices and tools on-premises
  • Build on open source standards

Common Use Cases

  • Modernize on-premises applications from virtual machines to containers
  • Internal development platforms to standardize how teams consume Kubernetes across the organization
  • Telco 5G Radio Access Networks (RAN) and Core workloads
  • Regulated services in private data centers on-premises

What’s Next?

1.1 - Frequently Asked Questions

Frequently asked questions about EKS Anywhere

AuthN / AuthZ

How do my applications running on EKS Anywhere authenticate with AWS services using IAM credentials?

Your applications can use the IAM Role for Service Account (IRSA) feature; see the IRSA reference guide for details.

Does EKS Anywhere support OIDC (including Azure AD and AD FS)?

Yes, EKS Anywhere can create clusters that support API server OIDC authentication. This means you can federate authentication through AD FS locally or through Azure AD, along with other IDPs that support the OIDC standard. In order to add OIDC support to your EKS Anywhere clusters, you need to configure your cluster by updating the configuration file before creating the cluster. Please see the OIDC reference for details.
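
As a rough illustration only (the values are placeholders and the OIDC reference is the authoritative source for the full field list), the identity provider is typically expressed as an OIDCConfig object appended to the cluster configuration file and referenced from the Cluster object:

# Hypothetical sketch: append an OIDCConfig object to your cluster config file.
# Replace the placeholders with your IDP values; see the OIDC reference for all fields.
cat >> cluster.yaml <<'EOF'
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: OIDCConfig
metadata:
  name: my-cluster-oidc
spec:
  clientId: "<oidc-client-id>"
  issuerUrl: "https://<oidc-issuer-url>"
EOF
# The Cluster object then references it under spec.identityProviderRefs
# (kind: OIDCConfig, name: my-cluster-oidc).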

Does EKS Anywhere support LDAP?

EKS Anywhere does not support LDAP out of the box. However, you can look into the Dex LDAP Connector .

Can I use AWS IAM for Kubernetes resource access control on EKS Anywhere?

Yes, you can install the aws-iam-authenticator on your EKS Anywhere cluster to achieve this.
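
A rough sketch of how this is commonly expressed, assuming an AWSIamConfig object referenced from the Cluster spec (values below are placeholders; the aws-iam-authenticator reference is the authoritative source):

# Hypothetical sketch: append an AWSIamConfig object to your cluster config file
# and reference it from the Cluster object's spec.identityProviderRefs.
cat >> cluster.yaml <<'EOF'
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
  name: my-cluster-iam-auth
spec:
  awsRegion: us-west-2
  backendMode:
    - EKSConfigMap
  partition: aws
EOF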

Miscellaneous

How much does EKS Anywhere cost?

EKS Anywhere is free, open source software that you can download, install on your existing hardware, and run in your own data centers. It includes management and CLI tooling for all supported cluster topologies on all supported providers . You are responsible for providing infrastructure where EKS Anywhere runs (e.g. VMware, bare metal), and some providers require third party hardware and software contracts.

The EKS Anywhere Enterprise Subscription provides access to curated packages and enterprise support. The subscription is optional but recommended; pricing is based on the number of clusters and the number of years of support you need.

Can I connect my EKS Anywhere cluster to EKS?

Yes, you can install EKS Connector to connect your EKS Anywhere cluster to AWS EKS. EKS Connector is a software agent that you can install on the EKS Anywhere cluster that enables the cluster to communicate back to AWS. Once connected, you can immediately see a read-only view of the EKS Anywhere cluster with workload and cluster configuration information on the EKS console, alongside your EKS clusters.
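
As a hedged sketch (the cluster name and role ARN below are placeholders, and the EKS Connector documentation describes the full flow, including applying the connector manifests to the cluster), registration starts with the register-cluster API:

# Hypothetical example: register an EKS Anywhere cluster with the EKS console.
aws eks register-cluster \
  --name my-eks-anywhere-cluster \
  --connector-config roleArn=arn:aws:iam::111122223333:role/eks-connector-agent-role,provider=EKS_ANYWHERE
# The command returns activation details that are used when applying the
# eks-connector manifests to the cluster (see the EKS Connector docs).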

How does the EKS Connector authenticate with AWS?

During start-up, the EKS Connector generates and stores an RSA key-pair as Kubernetes secrets. It also registers with AWS using the public key and the activation details from the cluster registration configuration file. The EKS Connector needs AWS credentials to receive commands from AWS and to send the response back. Whenever it requires AWS credentials, it uses its private key to sign the request and invokes AWS APIs to request the credentials.

How does the EKS Connector authenticate with my Kubernetes cluster?

The EKS Connector acts as a proxy and forwards the EKS console requests to the Kubernetes API server on your cluster. In the initial release, the connector uses impersonation with its service account secrets to interact with the API server. Therefore, you need to associate the connector’s service account with a ClusterRole, which gives permission to impersonate AWS IAM entities.

How do I enable an AWS user account to view my connected cluster through the EKS console?

For each AWS user or other IAM identity, you should add a cluster role binding to the Kubernetes cluster with the appropriate permissions for that IAM identity. Additionally, each of these IAM entities must be associated with an IAM policy that allows it to invoke the EKS Connector on the cluster.
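
For illustration only (the IAM ARN below is a placeholder, and AWS publishes ready-made console-access manifests that you should prefer), a minimal read-only binding might look like:

# Hypothetical sketch: since the connector impersonates the IAM ARN as a
# Kubernetes user, bind a read-only ClusterRole to that ARN.
kubectl create clusterrolebinding eks-console-dashboard-view \
  --clusterrole=view \
  --user="arn:aws:iam::111122223333:role/console-viewer"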

Can I use Amazon Controllers for Kubernetes (ACK) on EKS Anywhere?

Yes, you can leverage AWS services from your EKS Anywhere clusters on-premises through Amazon Controllers for Kubernetes (ACK) .

Can I deploy EKS Anywhere on other clouds?

EKS Anywhere can be installed on any infrastructure with the required Bare Metal, CloudStack, or VMware vSphere components. See the EKS Anywhere Bare Metal, CloudStack, or vSphere documentation.

How is EKS Anywhere different from ECS Anywhere?

Amazon ECS Anywhere is an option for Amazon Elastic Container Service (ECS) to run containers on your on-premises infrastructure. The ECS Anywhere Control Plane runs in an AWS region and allows you to install the ECS agent on worker nodes that run outside of an AWS region. Workloads that run on ECS Anywhere nodes are scheduled by ECS. You are not responsible for running, managing, or upgrading the ECS Control Plane.

EKS Anywhere runs the Kubernetes Control Plane and worker nodes on your infrastructure. You are responsible for managing the EKS Anywhere Control Plane and worker nodes. There is no requirement to have an AWS account to run EKS Anywhere.

If you’d like to see how EKS Anywhere compares to EKS, please see the information here.

How can I manage EKS Anywhere at scale?

You can perform cluster lifecycle and configuration management at scale through GitOps-based tools. EKS Anywhere offers Git-driven cluster management through the integrated Flux controller. See the Manage cluster with GitOps documentation for details.
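
A minimal sketch, assuming a GitHub-backed FluxConfig (owner, repository, and names are placeholders; the GitOps documentation is the authoritative reference for the fields):

# Hypothetical sketch: append a FluxConfig object to your cluster config file
# and reference it from the Cluster object's spec.gitOpsRef.
cat >> cluster.yaml <<'EOF'
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: my-flux-config
spec:
  github:
    owner: "<github-owner>"
    repository: "<cluster-config-repo>"
    personal: true
EOF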

Can I run EKS Anywhere on ESXi?

No. EKS Anywhere is only supported on providers listed on the EKS Anywhere providers page. There would need to be a change to the upstream project to support ESXi.

Can I deploy EKS Anywhere on a single node?

Yes. Single node cluster deployment is supported for Bare Metal. See workerNodeGroupConfigurations for details.
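
A rough sketch of the flow, assuming the Tinkerbell (Bare Metal) provider and placeholder file names: generate a cluster config, set controlPlaneConfiguration.count to 1, delete the workerNodeGroupConfigurations block entirely, and then create the cluster.

# Hypothetical single-node Bare Metal flow; names and files are placeholders.
eksctl anywhere generate clusterconfig single-node --provider tinkerbell > single-node.yaml
# Edit single-node.yaml: set controlPlaneConfiguration.count to 1 and remove
# the workerNodeGroupConfigurations section, then:
eksctl anywhere create cluster -f single-node.yaml --hardware-csv hardware.csv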

1.2 - Partners

EKS Anywhere validated partners

Amazon EKS Anywhere maintains relationships with third-party vendors to provide add-on solutions for EKS Anywhere clusters. A complete list of these partners is maintained on the Amazon EKS Anywhere Partners page. See Conformitron: Validate third-party software with Amazon EKS and Amazon EKS Anywhere for information on how conformance testing and quality assurance is done on this software.

The following shows validated EKS Anywhere partners whose products have passed conformance tests for specific EKS Anywhere providers and versions:

Bare Metal provider validated partners

Kubernetes Version :  1.27 
Date of Conformance Test : 2024-05-02
 
Following ISV Partners have Validated their Conformance : 
 
VENDOR_PRODUCT   VENDOR_PRODUCT_TYPE          VENDOR_PRODUCT_VERSION
aqua             aqua-enforcer                2022.4.20
dynatrace        dynatrace                    0.10.1
komodor          k8s-watcher                  1.15.5
kong             kong-enterprise              2.27.0
accuknox         kubearmor                    v1.3.2
kubecost         cost-analyzer                2.1.0
nirmata          enterprise-kyverno           1.6.10
lacework         polygraph                    6.11.0
newrelic         nri-bundle                   5.0.64
perfectscale     perfectscale                 v0.0.38
pulumi           pulumi-kubernetes-operator   0.3.0
solo.io          solo-istiod                  1.18.3-eks-a
sysdig           sysdig-agent                 1.6.3
tetrate.io       tetrate-istio-distribution   1.18.1
hashicorp        vault                        0.25.0

vSphere provider validated partners

Kubernetes Version :  1.28 
Date of Conformance Test : 2024-05-02
 
Following ISV Partners have Validated their Conformance : 
 
VENDOR_PRODUCT   VENDOR_PRODUCT_TYPE          VENDOR_PRODUCT_VERSION
aqua             aqua-enforcer                2022.4.20
dynatrace        dynatrace                    0.10.1
komodor          k8s-watcher                  1.15.5
kong             kong-enterprise              2.27.0
accuknox         kubearmor                    v1.3.2
kubecost         cost-analyzer                2.1.0
nirmata          enterprise-kyverno           1.6.10
lacework         polygraph                    6.11.0
newrelic         nri-bundle                   5.0.64
perfectscale     perfectscale                 v0.0.38
pulumi           pulumi-kubernetes-operator   0.3.0
solo.io          solo-istiod                  1.18.3-eks-a
sysdig           sysdig-agent                 1.6.3
tetrate.io       tetrate-istio-distribution   1.18.1
hashicorp        vault                        0.25.0

AWS Snow provider validated partners

Kubernetes Version :  1.28 
Date of Conformance Test : 2023-11-10
 
Following ISV Partners have Validated their Conformance : 
 
VENDOR_PRODUCT   VENDOR_PRODUCT_TYPE
dynatrace        dynatrace
solo.io          solo-istiod
komodor          k8s-watcher
kong             kong-enterprise
accuknox         kubearmor
kubecost         cost-analyzer
nirmata          enterprise-kyverno
lacework         polygraph
suse             neuvector
newrelic         newrelic-bundle
perfectscale     perfectscale
pulumi           pulumi-kubernetes-operator
sysdig           sysdig-agent
hashicorp        vault

AWS Outpost provider validated partners

Kubernetes Version :  1.27 
Date of Conformance Test : 2024-05-02
 
Following ISV Partners have Validated their Conformance : 
 
VENDOR_PRODUCT   VENDOR_PRODUCT_TYPE          VENDOR_PRODUCT_VERSION
aqua             aqua-enforcer                2022.4.20
dynatrace        dynatrace                    0.10.1
komodor          k8s-watcher                  1.15.5
kong             kong-enterprise              2.27.0
accuknox         kubearmor                    v1.3.2
kubecost         cost-analyzer                2.1.0
nirmata          enterprise-kyverno           1.6.10
lacework         polygraph                    6.11.0
perfectscale     perfectscale                 v0.0.38
pulumi           pulumi-kubernetes-operator   0.3.0
solo.io          solo-istiod                  1.18.3-eks-a
sysdig           sysdig-agent                 1.6.3
tetrate.io       tetrate-istio-distribution   1.18.1
hashicorp        vault                        0.25.0

2 - What's New

New EKS Anywhere releases, features, and fixes

2.1 - Changelog

Changelog for EKS Anywhere releases

v0.19.10

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.22.0 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Upgraded

  • EKS Distro:
  • EKS Anywhere Packages: v0.4.3 to v0.4.4
  • Cilium: v1.13.18 to v1.13.19
  • containerd: v1.7.20 to v1.7.22
  • runc: v1.1.13 to v1.1.14
  • local-path-provisioner: v0.0.28 to v0.0.29
  • etcdadm-controller: v1.0.22 to v1.0.23
  • New base images with CVE fixes for Amazon Linux 2

v0.19.9

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.2 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Upgraded

v0.19.8

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.2 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Upgraded

Changed

  • Added additional validation before marking controlPlane and workers ready #8455

Fixed

  • Fix panic when datacenter obj is not found #8494

v0.19.7

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.2 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Upgraded

Changed

  • Updated cluster status reconciliation logic for worker node groups with autoscaling configuration #8254
  • Added logic to apply new hardware on baremetal cluster upgrades #8288

Fixed

  • Fixed bug when installer does not create CCM secret for Nutanix workload cluster #8191
  • Fixed upgrade workflow for registry mirror certificates in EKS Anywhere packages #7114

v0.19.6

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.2 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Changed

Fixed

  • Fixed cluster directory being created with root ownership #8120

v0.19.5

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.2 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Changed

  • Upgraded EKS-Anywhere Packages from v0.4.2 to v0.4.3

Fixed

  • Fixed registry mirror with authentication for EKS Anywhere packages

v0.19.4

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.2 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Changed

Fixed

  • Added processor for Tinkerbell Template Config #7816
  • Added nil check for eksa-version when setting etcd url #8018
  • Fixed registry mirror secret credentials set to empty #7933

v0.19.3

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.2 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Changed

  • Updated helm to v3.14.3 #3050

Fixed

v0.19.2

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.2 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Changed

Fixed

  • Fixed Tinkerbell action image URIs when using a registry mirror with proxy cache.

v0.19.1

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.2 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Changed

Added

  • Added a preflight check for upgrading management components that ensures the management components version is at most one EKS Anywhere minor version greater than the EKS Anywhere version of the cluster components #7800.

Fixed

  • EKS Anywhere package bundles ending in 152, 153, 154, and 157 have image tag issues, which have been resolved in bundle 158. For example, for Kubernetes version v1.29 the corrected bundle is public.ecr.aws/eks-anywhere/eks-anywhere-packages-bundles:v1-29-158
  • Fixed InPlace custom resources being created again after a successful node upgrade due to a delay in objects in the client cache #7779.
  • Fixed #7623 by encoding the basic auth credentials to base64 when using them in templates #7829.
  • Added a fix for an error that may occur while upgrading management components: if the cluster object is modified by another process before it is applied, a conflict error is thrown and a retry is prompted.

v0.19.0

Supported OS version details

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.0 *
RHEL 8.x
RHEL 9.x

* EKS Anywhere issue regarding deprecation of Bottlerocket bare metal variants

Added

  • Support for Kubernetes v1.29
  • Support for in-place EKS Anywhere and Kubernetes version upgrades on Bare Metal clusters
  • Support for horizontally scaling etcd count in clusters with external etcd deployments (#7127 )
  • External etcd support for Nutanix (#7550 )
  • Etcd encryption for Nutanix (#7565 )
  • Nutanix Cloud Controller Manager integration (#7534 )
  • Enable image signing for all images used in cluster operations
  • RedHat 9 support for CloudStack (#2842 )
  • New upgrade management-components command which upgrades management components independently of cluster components (#7238 )
  • New upgrade plan management-components command which provides new release versions for the next management components upgrade (#7447 )
  • Make maxUnhealthy count configurable for control plane and worker machines (#7281 )

Changed

  • Unification of controller and CLI workflows for cluster lifecycle operations such as create, upgrade, and delete
  • Perform CAPI backup on workload cluster during upgrade (#7364 )
  • Extend maxSurge and maxUnavailable configuration support to all providers
  • Upgraded Cilium to v1.13.19
  • Upgraded EKS-D:
  • Cluster API Provider AWS Snow: v0.1.26 to v0.1.27
  • Cluster API: v1.5.2 to v1.6.1
  • Cluster API Provider vSphere: v1.7.4 to v1.8.5
  • Cluster API Provider Nutanix: v1.2.3 to v1.3.1
  • Flux: v2.0.0 to v2.2.3
  • Kube-vip: v0.6.0 to v0.7.0
  • Image-builder: v0.1.19 to v0.1.24
  • Kind: v0.20.0 to v0.22.0

Removed

Fixed

  • Validate OCI namespaces for registry mirror on Bottlerocket (#7257 )
  • Make Cilium reconciler use provider namespace when generating network policy (#7705 )

v0.18.7

Tool Upgrade

  • EKS Anywhere v0.18.7 Admin AMI with CVE fixes for Amazon Linux 2

Supported Operating Systems

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.0
RHEL 8.7
RHEL 9.x

v0.18.6

Tool Upgrade

  • EKS Anywhere v0.18.6 Admin AMI with CVE fixes for runc
  • New base images with CVE fixes for Amazon Linux 2
  • Bottlerocket v1.15.1 to 1.19.0
  • runc v1.1.10 to v1.1.12 (CVE-2024-21626 )
  • containerd v1.7.11 to v1.7.12

Supported Operating Systems

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.19.0
RHEL 8.x
RHEL 9.x

v0.18.5

Tool Upgrade

  • New EKS Anywhere Admin AMI with CVE fixes for Amazon Linux 2
  • New base images with CVE fixes for Amazon Linux 2

Supported Operating Systems

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.15.1
RHEL 8.7
RHEL 9.x

v0.18.4

Feature

  • Nutanix: Enable api-server audit logging for Nutanix (#2664 )

Bug

  • CNI reconciler now properly pulls images from registry mirror instead of public ECR in airgapped environments: #7170

Supported Operating Systems

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.15.1
RHEL 8.7
RHEL 9.x

v0.18.3

Fixed

  • Etcdadm: Renew client certificates when nodes rollover (etcdadm/#56 )
  • Include DefaultCNIConfigured condition in Cluster Ready status except when Skip Upgrades is enabled (#7132 )

Tool Upgrade

  • EKS Distro (Kubernetes):
    • v1.25.15 to v1.25.16
    • v1.26.10 to v1.26.11
    • v1.27.7 to v1.27.8
    • v1.28.3 to v1.28.4
  • Etcdadm Controller: v1.0.15 to v1.0.16

Supported Operating Systems

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.15.1
RHEL 8.7
RHEL 9.x

v0.18.2

Fixed

  • Image Builder: Correctly parse no_proxy inputs when both Red Hat Satellite and a proxy are used in image-builder. (#2664 )
  • vSphere: Fix template tag validation by specifying the full template path (#6437 )
  • Bare Metal: Skip kube-vip deployment when TinkerbellDatacenterConfig.skipLoadBalancerDeployment is set to true. (#6990 )

Other

  • Security: Patch incorrect conversion between uint64 and int64 (#7048 )
  • Security: Fix incorrect regex for matching curated package registry URL (#7049 )
  • Security: Patch malicious tarballs directory traversal vulnerability (#7057 )

Supported Operating Systems

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.15.1
RHEL 8.7
RHEL 9.x

v0.18.1

Tool Upgrade

  • EKS Distro (Kubernetes):
    • v1.25.14 to v1.25.15
    • v1.26.9 to v1.26.10
    • v1.27.6 to v1.27.7
    • v1.28.2 to v1.28.3
  • Etcdadm Bootstrap Provider: v1.0.9 to v1.0.10
  • Etcdadm Controller: v1.0.14 to v1.0.15
  • Cluster API Provider CloudStack: v0.4.9-rc7 to v0.4.9-rc8
  • EKS Anywhere Packages Controller : v0.3.12 to v0.3.13

Bug

  • Bare Metal: Ensure the Tinkerbell stack continues to run on management clusters when worker nodes are scaled to 0 (#2624 )

Supported Operating Systems

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.15.1
RHEL 8.7
RHEL 9.x

v0.18.0

Supported OS version details

OS             vSphere         Bare Metal      Nutanix         CloudStack      Snow
Ubuntu         20.04, 22.04    20.04, 22.04    20.04, 22.04    Not supported   20.04
Bottlerocket   1.15.1          1.15.1          Not supported   Not supported   Not supported
RHEL           8.7             8.7             9.x, 8.7        8.7             Not supported

Added

  • Etcd encryption for CloudStack and vSphere: #6557
  • Generate TinkerbellTemplateConfig command: #3588
  • Support for modular Kubernetes version upgrades with bare metal: #6735
    • OSImageURL added to Tinkerbell Machine Config
  • Bare metal out-of-band webhook: #5738
  • Support for Kubernetes v1.28
  • Support for air gapped image building: #6457
  • Support for RHEL 8 and RHEL 9 for Nutanix provider: #6822
  • Support proxy configuration on Redhat image building #2466
  • Support Redhat Satellite in image building #2467

Changed

  • KinD-less upgrades: #6622
    • Management cluster upgrades don’t require a local bootstrap cluster anymore.
    • The control plane of management clusters can be upgraded through the API. Previously only changes to the worker nodes were allowed.
  • Increased control over upgrades by separating external etcd reconciliation from control plane nodes: #6496
  • Upgraded Cilium to 1.12.15
  • Upgraded EKS-D:
  • Cluster API Provider CloudStack: v0.4.9-rc6 to v0.4.9-rc7
  • Cluster API Provider AWS Snow: v0.1.26 to v0.1.27
  • Upgraded CAPI to v1.5.2

Removed

  • Support for Kubernetes v1.23

Fixed

  • Fail on eksctl anywhere upgrade cluster plan -f: #6716
  • Error out when management kubeconfig is not present for workload cluster operations: #6501
  • Empty vSphereMachineConfig users fails CLI upgrade: #5420
  • CLI stalls on upgrade with Flux Gitops: #6453

v0.17.6

Bug

  • CNI reconciler now properly pulls images from registry mirror instead of public ECR in airgapped environments: #7170
  • Waiting for control plane to be fully upgraded: #6764

Other

  • Check for k8s version in the Cloudstack template name: #7130

Supported Operating Systems

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.14.3
RHEL 8.7 _

v0.17.5

Tool Upgrade

  • Cluster API Provider CloudStack: v0.4.9-rc7 to v0.4.9-rc8

Supported Operating Systems

vSphere Bare Metal Nutanix CloudStack Snow
Ubuntu 20.04
Ubuntu 22.04
Bottlerocket 1.14.3
RHEL 8.7 _

v0.17.4

Supported OS version details

OS             vSphere         Bare Metal      Nutanix         CloudStack      Snow
Ubuntu         20.04, 22.04    20.04, 22.04    20.04, 22.04    Not supported   20.04
Bottlerocket   1.14.3          1.14.3          Not supported   Not supported   Not supported
RHEL           8.7             8.7             Not supported   8.7             Not supported

Added

  • Enabled audit logging for kube-apiserver on baremetal provider (#6779 ).

v0.17.3

Supported OS version details

OS             vSphere         Bare Metal      Nutanix         CloudStack      Snow
Ubuntu         20.04, 22.04    20.04, 22.04    20.04, 22.04    Not supported   20.04
Bottlerocket   1.14.3          1.14.3          Not supported   Not supported   Not supported
RHEL           8.7             8.7             Not supported   8.7             Not supported

Fixed

  • Fixed cli upgrade mgmt kubeconfig flag (#6666 )
  • Ignore node taints when scheduling Cilium preflight daemonset (#6697 )
  • Baremetal: Prevent bare metal machine config references from changing to existing machine configs (#6674 )

v0.17.2

Supported OS version details

OS             vSphere         Bare Metal      Nutanix         CloudStack      Snow
Ubuntu         20.04, 22.04    20.04, 22.04    20.04, 22.04    Not supported   20.04
Bottlerocket   1.14.0          1.14.0          Not supported   Not supported   Not supported
RHEL           8.7             8.7             Not supported   8.7             Not supported

Fixed

  • Bare Metal: Ensure new worker node groups can reference new machine configs (#6615 )
  • Bare Metal: Fix writefile action to ensure Bottlerocket configs write content or error (#2441 )

Added

  • Added support for configuring healthchecks on EtcdadmClusters using etcdcluster.cluster.x-k8s.io/healthcheck-retries annotation (aws/etcdadm-controller#44 )
  • Add check for making sure quorum is maintained before deleting etcd machines (aws/etcdadm-controller#46 )

Changed

v0.17.1

Supported OS version details

OS             vSphere         Bare Metal      Nutanix         CloudStack      Snow
Ubuntu         20.04, 22.04    20.04, 22.04    20.04, 22.04    Not supported   20.04
Bottlerocket   1.14.0          1.14.0          Not supported   Not supported   Not supported
RHEL           8.7             8.7             Not supported   8.7             Not supported

Fixed

  • Fix worker node groups being rolled when labels adjusted #6330
  • Fix worker node groups being rolled out when taints are changed #6482
  • Fix vSphere template tags validation to run on the control plane and etcd VSphereMachineConfigs #6591
  • Fix Bare Metal upgrade with custom pod CIDR #6442

Added

  • Add validation for missing management cluster kubeconfig during workload cluster operations #6501

v0.17.0

Supported OS version details

OS             vSphere         Bare Metal      Nutanix         CloudStack      Snow
Ubuntu         20.04, 22.04    20.04, 22.04    20.04, 22.04    Not supported   20.04
Bottlerocket   1.14.0          1.14.0          Not supported   Not supported   Not supported
RHEL           8.7             8.7             Not supported   8.7             Not supported

Note: We have updated the image-builder docs to include the latest enhancements. Please refer to the image-builder docs for more details.

Added

  • Add support for AWS CodeCommit repositories in FluxConfig with git configuration #4290
  • Add new information to the EKS Anywhere Cluster status #5628 :
    • Add the ControlPlaneInitialized, ControlPlaneReady, DefaultCNIConfigured, WorkersReady, and Ready conditions.
    • Add the observedGeneration field.
    • Add the failureReason field.
  • Add support for different machine templates for control plane, etcd, and worker node in vSphere provider #4255
  • Add support for different machine templates for control plane, etcd, and worker node in Cloudstack provider #6291
  • Add support for Kubernetes version 1.25, 1.26, and 1.27 to CloudStack provider #6167
  • Add bootstrap cluster backup in the event of cluster upgrade error #6086
  • Add support for organizing virtual machines into categories with the Nutanix provider #6014
  • Add support for configuring egressMasqueradeInterfaces option in Cilium CNI via EKS Anywhere cluster spec #6018
  • Add support for a --skip-validations=vsphere-user-privilege flag on create and upgrade cluster to skip the vSphere user privilege validation
  • Add support for upgrading control plane nodes separately from worker nodes for vSphere, Nutanix, Snow, and Cloudstack providers #6180
  • Add preflight validation to prevent skipping eks-a minor version upgrades #5688
  • Add preflight check to block using kindnetd CNI in all providers except Docker #6097
  • Added feature to configure machine health checks for API managed clusters and a new way to configure health check timeouts via the EKS-A spec #6176

Upgraded

  • Cluster API Provider vSphere: v1.6.1 to v1.7.0
  • Cluster API Provider Cloudstack: v0.4.9-rc5 to v0.4.9-rc6
  • Cluster API Provider Nutanix: v1.2.1 to v1.2.3

Cilium Upgrades

  • Cilium: v1.11.15 to v1.12.11

    Note: If you are using the vSphere provider with the Red Hat OS family, there is a known issue with VMware and the new Cilium version that only affects our Red Hat variants. To prevent this from affecting your upgrade from EKS Anywhere v0.16 to v0.17, we are adding a temporary daemonset to disable UDP offloading on the nodes before upgrading Cilium. After your cluster is upgraded, the daemonset will be deleted. This note is strictly informational as this change requires no additional effort from the user.

Changed

  • Change the default node startup timeout from 10m to 20m in Bare Metal provider #5942
  • EKS Anywhere now fails on pre-flights if a user does not have required permissions. #5865
  • An eksaVersion field in the cluster spec was added to better represent the CLI version and dependencies in an EKS-A cluster #5847
  • vSphere datacenter insecure and thumbprint are now mutable for upgrades when using the full lifecycle API #6143

Fixed

  • Fix cluster creation failure when the <Provider>DatacenterConfig is missing apiVersion field #6096
  • Allow registry mirror configurations to be mutable for Bottlerocket OS #2336
  • Patch an issue where mutable fields in the EKS Anywhere CloudStack API failed to trigger upgrades #5910
  • image builder: Fix runtime issue with git in image-builder v0.16.2 binary #2360
  • Bare Metal: Fix issue where metadata requests that return non-200 responses were incorrectly treated as OK #2256

Known Issues:

  • Upgrading Docker clusters from previous versions of EKS Anywhere may not work on Linux hosts due to an issue in the Cilium 1.11 to 1.12 upgrade. Docker clusters are meant solely for testing and are not recommended or supported for production use cases. There is currently no fix planned.
  • If you are installing EKS Anywhere Packages, Kubernetes versions 1.23-1.25 are incompatible with Kubernetes versions 1.26-1.27 due to an API difference. This means that you may not have worker nodes on Kubernetes version <= 1.25 when the control plane nodes are on Kubernetes version >= 1.26. Therefore, if you are upgrading your control plane nodes to 1.26, you must upgrade all nodes to 1.26 to avoid failures.
  • There is a known bug with systemd >= 249 and all versions of Cilium. This is currently known to only affect Ubuntu 22.04. This will be fixed in future versions of EKS Anywhere. To work around this issue, apply one of the following options on all nodes.

Option A

# Does not persist across reboots.
sudo ip rule add from all fwmark 0x200/0xf00 lookup 2004 pref 9
sudo ip rule add from all fwmark 0xa00/0xf00 lookup 2005 pref 10
sudo ip rule add from all lookup local pref 100

Option B

# Does persist across reboots.
# Add these values to /etc/systemd/networkd.conf
[Network]
ManageForeignRoutes=no
ManageForeignRoutingPolicyRules=no

Deprecated

  • The bundlesRef field in the cluster spec is now deprecated in favor of the new eksaVersion field and will be removed in three versions.

Removed

  • Installing vSphere CSI Driver as part of vSphere cluster creation. For more information on how to self-install the driver refer to the documentation here

⚠️ Breaking changes

  • CLI: --force-cleanup has been removed from create cluster, upgrade cluster and delete cluster commands. For more information on how to troubleshoot issues with the bootstrap cluster refer to the troubleshooting guide (1 and 2 ). #6384

v0.16.5

Changed

  • Bump up the worker count for etcdadm-controller from 1 to 10 #34
  • Add 2X replicas hard limit for rolling out new etcd machines #37

Fixed

  • Fix code panic in healthcheck loop in etcdadm-controller #41
  • Fix deleting out of date machines in etcdadm-controller #40

v0.16.4

Fixed

  • Fix support for having management cluster and workload cluster in different namespaces #6414

v0.16.3

Changed

  • During management cluster upgrade, if the backup of CAPI objects of all workload clusters attached to the management cluster fails before the upgrade starts, EKS Anywhere will only back up the management cluster #6360
  • Update kubectl wait retry policy to retry on TLS handshake errors #6373

Removed

  • Removed the validation for checking management cluster bundle compatibility on create/upgrade workload cluster #6365

v0.16.2

Fixes

  • CLI: Ensure importing packages and bundles honors the insecure flag #6056
  • vSphere: Fix credential configuration when using the full lifecycle controller #6058
  • Bare Metal: Fix handling of Hardware validation errors in Tinkerbell full lifecycle cluster provisioning #6091
  • Bare Metal: Fix parsing of bare metal cluster configurations containing embedded PEM certs #6095

Upgrades

  • AWS Cloud Provider: v1.27.0 to v1.27.1
  • EKS Distro:
    • Kubernetes v1.24.13 to v1.24.15
    • Kubernetes v1.25.9 to v1.25.11
    • Kubernetes v1.26.4 to v1.26.6
    • Kubernetes v1.27.1 to v1.27.3
  • Cluster API Provider Snow: v0.1.25 to v0.1.26

v0.16.0

Added

  • Workload clusters full lifecycle API support for CloudStack provider (#2754 )
  • Enable proxy configuration for Bare Metal provider (#5925 )
  • Kubernetes 1.27 support (#5929 )
  • Support for upgrades for clusters with pod disruption budgets (#5697 )
  • Bottlerocket network config uses MAC addresses instead of interface names for configuring interfaces for the Bare Metal provider (#3411 )
  • Allow users to configure additional Bottlerocket settings
    • kernel sysctl settings (#5304 )
    • boot kernel parameters (#5359 )
    • custom trusted cert bundles (#5625 )
  • Add support for IRSA on Nutanix (#5698 )
  • Add support for aws-iam-authenticator on Nutanix (#5698 )
  • Enable proxy configuration for Nutanix (#5779 )

Upgraded

  • Management cluster upgrades will only move management cluster’s components to bootstrap cluster and back. (#5914 )

Fixed

  • CloudStack control plane host port is only defaulted in CAPI objects if not provided. (#5792 ) (#5736 )

Deprecated

  • Add warning to deprecate disableCSI through CLI (#5918 ). Refer to the deprecation section in the vSphere provider documentation for more information.

Removed

  • Kubernetes 1.22 support

v0.15.4

Fixed

  • Add validation for tinkerbell ip for workload cluster to match management cluster (#5798 )
  • Update datastore usage validation to account for space that will free up during upgrade (#5524 )
  • Expand GITHUB_TOKEN regex to support fine-grained access tokens (#5764 )
  • Display the timeout flags in CLI help (#5637 )

v0.15.3

Added

  • Added bundles-override to package cli commands (#5695 )

Fixed

v0.15.2

Supported OS version details

OS             vSphere         Bare Metal      Nutanix         CloudStack      Snow
Ubuntu         20.04           20.04           20.04           Not supported   20.04
Bottlerocket   1.13.1          1.13.1          Not supported   Not supported   Not supported
RHEL           8.7             8.7             Not supported   8.7             Not supported

Added

  • Support for no-timeouts to more EKS Anywhere operations (#5565 )

Changed

  • Use kubectl for kube-proxy upgrader calls (#5609 )

Fixed

  • Fixed the failure to delete a Tinkerbell workload cluster due to an incorrect SSH key update during reconciliation (#5554 )
  • Fixed machineGroupRef updates for CloudStack and vSphere (#5313 )

v0.15.1

Supported OS version details

OS             vSphere         Bare Metal      Nutanix         CloudStack      Snow
Ubuntu         20.04           20.04           20.04           Not supported   20.04
Bottlerocket   1.13.1          1.13.1          Not supported   Not supported   Not supported
RHEL           8.7             8.7             Not supported   8.7             Not supported

Added

  • Kubernetes 1.26 support

Upgraded

  • Cilium updated from version v1.11.10 to version v1.11.15

Fixed

  • Fix http client in file reader to honor the provided HTTP_PROXY, HTTPS_PROXY and NO_PROXY env variables (#5488 )

v0.15.0

Supported OS version details

OS             vSphere         Bare Metal      Nutanix         CloudStack      Snow
Ubuntu         20.04           20.04           20.04           Not supported   20.04
Bottlerocket   1.13.1          1.13.1          Not supported   Not supported   Not supported
RHEL           8.7             8.7             Not supported   8.7             Not supported

Added

  • Workload clusters full lifecycle API support for Bare Metal provider (#5237 )
  • IRSA support for Bare Metal (#4361 )
  • Support for mixed disks within the same node grouping for BareMetal clusters (#3234 )
  • Workload clusters full lifecycle API support for Nutanix provider (#5190 )
  • OIDC support for Nutanix (#4711 )
  • Registry mirror support for Nutanix (#5236 )
  • Support for linking EKS Anywhere node VMs to Nutanix projects (#5266 )
  • Add CredentialsRef to NutanixDatacenterConfig to specify Nutanix credentials for workload clusters (#5114 )
  • Support for taints and labels for Nutanix provider (#5172 )
  • Support for InsecureSkipVerify for RegistryMirrorConfiguration across all providers. Currently only supported for Ubuntu and RHEL OS. (#1647 )
  • Support for configuring of Bottlerocket settings. (#707 )
  • Support for using a custom CNI (#5217 )
  • Ability to configure NTP servers on EKS Anywhere nodes for vSphere and Tinkerbell providers (#4760 )
  • Support for nonRootVolumes option in SnowMachineConfig (#5199 )
  • Validate template disk size with vSphere provider using Bottlerocket (#1571 )
  • Allow users to specify cloneMode for different VSphereMachineConfig (#4634 )
  • Validate management cluster bundles version is the same or newer than bundle version used to upgrade a workload cluster(#5105 )
  • Set hostname for Bottlerocket nodes (#3629 )
  • Curated Package controller as a package (#831 )
  • Curated Package Credentials Package (#829 )
  • Enable Full Cluster Lifecycle for curated packages (#807 )
  • Curated Package Controller Configuration in Cluster Spec (#5031 )

Upgraded

  • Bottlerocket upgraded from v1.13.0 to v1.13.1
  • Upgrade EKS Anywhere admin AMI to Kernel 5.15
  • Tinkerbell stack upgraded (#3233 ):
    • Cluster API Provider Tinkerbell v0.4.0
    • Hegel v0.10.1
    • Rufio v0.2.1
    • Tink v0.8.0
  • Curated Package Harbor upgraded from 2.5.1 to 2.7.1
  • Curated Package Prometheus upgraded from 2.39.1 to 2.41.0
  • Curated Package Metallb upgraded from 0.13.5 to 0.13.7
  • Curated Package Emissary upgraded from 3.3.0 to 3.5.1

Fixed

  • Applied a patch that fixes vCenter sessions leak (#1767 )

Breaking changes

  • Removed support for Kubernetes 1.21

v0.14.6

Fixed

  • Fix clustermanager no-timeouts option (#5445 )

v0.14.5

Fixed

  • Fix kubectl get call to point to full API name (#5342 )
  • Expand all kubectl calls to fully qualified names (#5347 )

v0.14.4

Added

  • --no-timeouts flag in create and upgrade commands to disable timeout for all wait operations
  • Management resources backup procedure with clusterctl

v0.14.3

Added

  • --aws-region flag to copy packages command.

Upgraded

  • CAPAS from v0.1.22 to v0.1.24.

v0.14.2

Added

  • Enabled support for Kubernetes version 1.25

v0.14.1

Added

  • support for authenticated pulls from registry mirror (#4796 )
  • option to override default nodeStartupTimeout in machine health check (#4800 )
  • Validate control plane endpoint with pods and services CIDR blocks(#4816 )

Fixed

  • Fixed an issue where registry mirror settings weren’t being applied properly on Bottlerocket nodes for the Tinkerbell provider

v0.14.0

Added

  • Add support for EKS Anywhere on AWS Snow (#1042 )
  • Static IP support for Bottlerocket (#4359 )
  • Add registry mirror support for curated packages
  • Add copy packages command (#4420 )

Fixed

  • Improve management cluster name validation for workload clusters

v0.13.1

Added

  • Multi-region support for all supported curated packages

Fixed

  • Fixed nil pointer in eksctl anywhere upgrade plan command

v0.13.0

Added

  • Workload clusters full lifecycle API support for vSphere and Docker (#1090 )
  • Single node cluster support for Bare Metal provider
  • Cilium updated to version v1.11.10
  • CLI high verbosity log output is automatically included in the support bundle after a CLI cluster command error (#1703 implemented by #4289 )
  • Allow to configure machine health checks timeout through a new flag --unhealthy-machine-timeout (#3918 implemented by #4123 )
  • Ability to configure rolling upgrade for Bare Metal and Cloudstack via maxSurge and maxUnavailable parameters
  • New Nutanix Provider
  • Workload clusters support for Bare Metal
  • VM Tagging support for vSphere VMs created in the cluster (#4228 )
  • Support for new curated packages:
    • Prometheus v2.39.1
  • Updated curated packages' versions:
    • ADOT v0.23.0 upgraded from v0.21.1
    • Emissary v3.3.0 upgraded from v3.0.0
    • Metallb v0.13.7 upgraded from v0.13.5
  • Support for packages controller to create target namespaces #601
  • (For more EKS Anywhere packages info: v0.13.0 )

Fixed

  • Kubernetes version upgrades from 1.23 to 1.24 for Docker clusters (#4266 )
  • Added missing docker login when doing authenticated registry pulls

Breaking changes

  • Removed support for Kubernetes 1.20

v0.12.2

Added

  • Add support for Kubernetes 1.24 (CloudStack support to come in future releases)#3491

Fixed

  • Fix authenticated registry mirror validations
  • Fix capc bug causing orphaned VMs in slow environments
  • Bundle activation problem for package controller

v0.12.1

Changed

  • Setting minimum wait time for nodes and machinedeployments (#3868, fixes #3822)

Fixed

  • Fixed worker node count pointer dereference issue (#3852)
  • Fixed eks-anywhere-packages reference in go.mod (#3902)
  • Surface dropped error in Cloudstack validations (#3832)

v0.12.0

⚠️ Breaking changes

  • Certificates signed with SHA-1 are not supported anymore for Registry Mirror. Users with a registry mirror and providing a custom CA cert will need to rotate the certificate served by the registry mirror endpoint before using the new EKS-A version. This is true for both new clusters (create cluster command) and existing clusters (upgrade cluster command).
  • The --source option was removed from several package commands. Use either --kube-version for registry or --cluster for cluster.

Added

  • Add support for EKS Anywhere with provider CloudStack
  • Add support to upgrade Bare Metal cluster
  • Add support for using Registry Mirror for Bare Metal
  • Redhat-based node image support for vSphere, CloudStack and Bare Metal EKS Anywhere clusters
  • Allow authenticated image pull using Registry Mirror for Ubuntu on vSphere cluster
  • Add option to disable vSphere CSI driver #3148
  • Add support for skipping load balancer deployment for Bare Metal so users can use their own load balancers #3608
  • Add support to configure aws-iam-authenticator on workload clusters independent of management cluster #2814
  • Add EKS Anywhere Packages support for remote management on workload clusters. (For more EKS Anywhere packages info: v0.12.0 )
  • Add new EKS Anywhere Packages
    • AWS Distro for OpenTelemetry (ADOT)
    • Cert Manager
    • Cluster Autoscaler
    • Metrics Server

Fixed

  • Remove special cilium network policy with policyEnforcementMode set to always due to lack of pod network connectivity for vSphere CSI
  • Fixed #3391 #3560 for AWSIamConfig upgrades on EKS Anywhere workload clusters

v0.11.4

Added

  • Add validate session permission for vSphere

Fixed

  • Fix datacenter naming bug for vSphere #3381
  • Fix os family validation for vSphere
  • Fix controller overwriting secret for vSphere #3404
  • Fix unintended rolling upgrades when upgrading from an older EKS-A version for CloudStack

v0.11.3

Added

  • Add some bundleRef validation
  • Enable kube-rbac-proxy on CloudStack cluster controller’s metrics port

Fixed

  • Fix issue with fetching EKS-D CRDs/manifests with retries
  • Update BundlesRef when building a Spec from file
  • Fix worker node upgrade inconsistency in Cloudstack

v0.11.2

Added

  • Add a preflight check to validate vSphere user’s permissions #2744

Changed

  • Make DiskOffering in CloudStackMachineConfig optional

Fixed

  • Fix upgrade failure when flux is enabled #3091 #3093
  • Add token-refresher to default images to fix import/download images commands
  • Improve retry logic for transient issues with kubectl applies and helm pulls #3167
  • Fix issue fetching curated packages images

v0.11.1

Added

  • Add --insecure flag to import/download images commands #2878

v0.11.0

Breaking Changes

  • EKS Anywhere no longer distributes Ubuntu OVAs for use with EKS Anywhere clusters. Building your own Ubuntu-based nodes as described in Building node images is the only supported way to get that functionality.

Added

  • Add support for Kubernetes 1.23 #2159
  • Add support for Support Bundle for validating control plane IP with vSphere provider
  • Add support for aws-iam-authenticator on Bare Metal
  • Curated Packages General Availability
  • Added Emissary Ingress Curated Package

Changed

  • Install and enable GitOps in the existing cluster with upgrade command

v0.10.1

Changed

  • Updated EKS Distro versions to latest release

Fixed

  • Fixed control plane nodes not upgraded for same kube version #2636

v0.10.0

Added

  • Added support for EKS Anywhere on bare metal with the Tinkerbell provider. EKS Anywhere on bare metal supports the complete provisioning cycle, including power on/off and PXE boot for standing up a cluster with the given hardware data.
  • Support for node CIDR mask config exposed via the cluster spec. #488

Changed

  • Upgraded cilium from 1.9 to 1.10. #1124
  • Changes for EKS Anywhere packages v0.10.0

Fixed

  • Fix issue using self-signed certificates for registry mirror #1857

v0.9.2

Fixed

  • Fix issue by avoiding processing Snow images when URI is empty

v0.9.1

v0.9.0

Added

  • Adding support to EKS Anywhere for a generic git provider as the source of truth for GitOps configuration management. #9
  • Allow users to configure Cloud Provider and CSI Driver with different credentials. #1730
  • Support to install, configure and maintain operational components that are secure and tested by Amazon on EKS Anywhere clusters.#2083
  • A new Workshop section has been added to EKS Anywhere documentation.
  • Added support for curated packages behind a feature flag #1893

Fixed

  • Fix issue specifying proxy configuration for helm template command #2009

v0.8.2

Fixed

  • Fix issue with upgrading cluster from a previous minor version #1819

v0.8.1

Fixed

  • Fix issue with downloading artifacts #1753

v0.8.0

Added

  • SSH keys and Users are now mutable #1208
  • OIDC configuration is now mutable #676
  • Add support for Cilium’s policy enforcement mode #726

Changed

  • Install Cilium networking through Helm instead of static manifest

v0.7.2 - 2022-02-28

Fixed

  • Fix issue with downloading artifacts #1327

v0.7.1 - 2022-02-25

Added

  • Support for taints in worker node group configurations #189
  • Support for taints in control plane configurations #189
  • Support for labels in worker node group configuration #486
  • Allow removal of worker node groups using the eksctl anywhere upgrade command #1054

v0.7.0 - 2022-01-27

Added

  • Support for aws-iam-authenticator as an authentication option in EKS-A clusters #90
  • Support for multiple worker node groups in EKS-A clusters #840
  • Support for IAM Role for Service Account (IRSA) #601
  • New command upgrade plan cluster lists core component changes affected by upgrade cluster #499
  • Support for workload cluster’s control plane and etcd upgrade through GitOps #1007
  • Upgrading a Flux managed cluster previously required manual steps. These steps have now been automated. #759 , #1019
  • Cilium CNI will now be upgraded by the upgrade cluster command #326

Changed

  • EKS-A now uses Cluster API (CAPI) v1.0.1 and v1beta1 manifests, upgrading from v0.3.23 and v1alpha3 manifests.
  • Kubernetes components and etcd now use TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 as the configured TLS cipher suite #657 , #759
  • Automated git repository structure changes during Flux component upgrade workflow #577

v0.6.0 - 2021-10-29

Added

  • Support to create and manage workload clusters #94
  • Support for upgrading eks-anywhere components #93 , Cluster upgrades
    • IMPORTANT: Currently upgrading existing flux managed clusters requires performing a few additional steps . The fix for upgrading the existing clusters will be published in 0.6.1 release to improve the upgrade experience.
  • k8s CIS compliance #193
  • Support bundle improvements #92
  • Ability to upgrade control plane nodes before worker nodes #100
  • Ability to use your own container registry #98
  • Make namespace configurable for anywhere resources #177

Fixed

  • Fix ova auto-import issue for multi-datacenter environments #437
  • OVA import via EKS-A CLI sometimes fails #254
  • Add proxy configuration to etcd nodes for bottlerocket #195

Removed

  • overrideClusterSpecFile field in cluster config

v0.5.0

Added

  • Initial release of EKS-A

2.2 - Release Alerts

SNS Alerts for EKS Anywhere releases

EKS Anywhere uses Amazon Simple Notification Service (SNS) to notify availability of a new release. It is recommended that you keep your clusters up to date with the latest EKS Anywhere release. Please follow the instructions below to subscribe to the SNS notification topic (an AWS CLI alternative is shown after the steps).

  • Sign in to your AWS Account
  • Select us-east-1 region
  • Go to the SNS Console
  • In the left navigation pane, choose “Subscriptions”
  • On the Subscriptions page, choose “Create subscription”
  • On the Create subscription page, in the Details section enter the following information
    • Topic ARN
      arn:aws:sns:us-east-1:153288728732:eks-anywhere-updates
      
    • Protocol - Email
    • Endpoint - Your preferred email address
  • Choose Create Subscription
  • In a few minutes, you will receive an email asking you to confirm the subscription
  • Click the confirmation link in the email
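
If you prefer the AWS CLI, the same subscription can be created with a single command (replace the email address with your own):

# Subscribe your email address to the EKS Anywhere updates SNS topic.
aws sns subscribe \
  --region us-east-1 \
  --topic-arn arn:aws:sns:us-east-1:153288728732:eks-anywhere-updates \
  --protocol email \
  --notification-endpoint you@example.com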

3 - Concepts

The Concepts section contains an overview of the EKS Anywhere architecture, components, versioning, and support.

Most of the content in the EKS Anywhere documentation is specific to how EKS Anywhere deploys and manages Kubernetes clusters. For information on Kubernetes itself, reference the Kubernetes documentation.

3.1 - EKS Anywhere Architecture

EKS Anywhere architecture overview

EKS Anywhere supports many different types of infrastructure including VMware vSphere, bare metal, Nutanix, Apache CloudStack, and AWS Snow. EKS Anywhere is built on the Kubernetes sub-project called Cluster API (CAPI), which is focused on providing declarative APIs and tooling to simplify the provisioning, upgrading, and operating of multiple Kubernetes clusters. EKS Anywhere inherits many of the same architectural patterns and concepts that exist in CAPI. Reference the CAPI documentation to learn more about the core CAPI concepts.

Components

Each EKS Anywhere version includes all components required to create and manage EKS Anywhere clusters.

Administrative / CLI components

Responsible for lifecycle operations of management or standalone clusters, building images, and collecting support diagnostics. Admin / CLI components run on Admin machines or image building machines.

Component Description
eksctl CLI Command-line tool to create, upgrade, and delete management, standalone, and optionally workload clusters.
image-builder Command-line tool to build Ubuntu and RHEL node images
diagnostics collector Command-line tool to produce support diagnostics bundle
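
For illustration, a few typical Admin machine invocations are sketched below (cluster name, provider, and file names are placeholders; consult the CLI reference for the full set of commands and flags):

# Hypothetical examples of common eksctl CLI usage from an Admin machine.
eksctl anywhere version
eksctl anywhere generate clusterconfig mgmt --provider vsphere > mgmt.yaml
eksctl anywhere create cluster -f mgmt.yaml
eksctl anywhere upgrade cluster -f mgmt.yaml
eksctl anywhere delete cluster -f mgmt.yaml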

Management components

Responsible for infrastructure and cluster lifecycle management (create, update, upgrade, scale, delete). Management components run on standalone or management clusters.

Component Description
CAPI controller Controller that manages core Cluster API objects such as Cluster, Machine, MachineHealthCheck etc.
EKS Anywhere lifecycle controller Controller that manages EKS Anywhere objects such as EKS Anywhere Clusters, EKS-A Releases, FluxConfig, GitOpsConfig, AwsIamConfig, OidcConfig
Curated Packages controller Controller that manages EKS Anywhere Curated Package objects
Kubeadm controller Controller that manages Kubernetes control plane objects
Etcdadm controller Controller that manages etcd objects
Provider-specific controllers Controller that interacts with infrastructure provider (vSphere, bare metal etc.) and manages the infrastructure objects
EKS Anywhere CRDs Custom Resource Definitions that EKS Anywhere uses to define and control infrastructure, machines, clusters, and other objects

Cluster components

Components that make up a Kubernetes cluster where applications run. Cluster components run on standalone, management, and workload clusters.

Component Description
Kubernetes Kubernetes components that include kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kubectl
etcd Etcd database used for Kubernetes control plane datastore
Cilium Container Networking Interface (CNI)
CoreDNS In-cluster DNS
kube-proxy Network proxy that runs on each node
containerd Container runtime
kube-vip Load balancer that runs on control plane to balance control plane IPs

Deployment Architectures

EKS Anywhere supports two deployment architectures:

  • Standalone clusters: If you are only running a single EKS Anywhere cluster, you can deploy a standalone cluster. This deployment type runs the EKS Anywhere management components on the same cluster that runs workloads. Standalone clusters must be managed with the eksctl CLI. A standalone cluster is effectively a management cluster, but in this deployment type it only manages itself.

  • Management cluster with separate workload clusters: If you plan to deploy multiple EKS Anywhere clusters, it’s recommended to deploy a management cluster with separate workload clusters. With this deployment type, the EKS Anywhere management components are only run on the management cluster, and the management cluster can be used to perform cluster lifecycle operations on a fleet of workload clusters. The management cluster must be managed with the eksctl CLI, whereas workload clusters can be managed with the eksctl CLI or with Kubernetes API-compatible clients such as kubectl, GitOps, or Terraform.

If you use the management cluster architecture, the management cluster must run on the same infrastructure provider as your workload clusters. For example, if you run your management cluster on vSphere, your workload clusters must also run on vSphere. If you run your management cluster on bare metal, your workload clusters must also run on bare metal. Similarly, all nodes in a workload cluster must run on the same infrastructure provider. You cannot have control plane nodes on vSphere and worker nodes on bare metal.

Both deployment architectures can run entirely disconnected from the internet and AWS Cloud. For information on deploying EKS Anywhere in airgapped environments, reference the Airgapped Installation page.

Standalone Clusters

Technically, standalone clusters are the same as management clusters, with the only difference being that standalone clusters are only capable of managing themselves. Regardless of the deployment architecture you choose, you always start by creating a standalone cluster from an Admin machine. When you first create a standalone cluster, a temporary Kind bootstrap cluster is used on your Admin machine to pull down the required components and bootstrap your standalone cluster on the infrastructure of your choice.

Standalone clusters self-manage and can run applications

Management Clusters

Management clusters are long-lived EKS Anywhere clusters that can create and manage a fleet of EKS Anywhere workload clusters. Management clusters run both management and cluster components. Workload clusters run cluster components only and are where your applications run. Management clusters enable you to centrally manage your workload clusters with Kubernetes API-compatible clients such as kubectl, GitOps, or Terraform, and prevent management components from interfering with the resource usage of your applications running on workload clusters.

Management clusters can create and manage multiple workload clusters
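
When you use a management cluster, a workload cluster's EKS Anywhere spec can be applied directly to the management cluster with any Kubernetes API-compatible client. The following is a hedged sketch: the cluster name mgmt and the file name are placeholders, and the kubeconfig path follows the default <cluster-name>/<cluster-name>-eks-a-cluster.kubeconfig location described in the Installation section.

# Create or update a workload cluster by applying its spec to the management cluster
kubectl apply -f workload-cluster-1.yaml \
    --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig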

3.2 - Versioning

EKS Anywhere and Kubernetes version support policy and release cycle

This page contains information on the EKS Anywhere release cycle and support for Kubernetes versions.

When creating new clusters, we recommend that you use the latest available Kubernetes version supported by EKS Anywhere. If your application requires a specific version of Kubernetes, you can select older versions. You can create new EKS Anywhere clusters on any Kubernetes version that the EKS Anywhere version supports.

You must have an EKS Anywhere Enterprise Subscription to receive support for EKS Anywhere from AWS.

Kubernetes versions

Each EKS Anywhere version includes support for multiple Kubernetes minor versions.

The release and support schedule for Kubernetes versions in EKS Anywhere aligns with the Amazon EKS standard support schedule as documented on the Amazon EKS Kubernetes release calendar. A minor Kubernetes version is under standard support in EKS Anywhere for 14 months after it’s released in EKS Anywhere. EKS Anywhere currently does not offer extended version support for Kubernetes versions. If you are interested in extended version support for Kubernetes versions in EKS Anywhere, please upvote or comment on EKS Anywhere GitHub Issue #6793. Patch releases for Kubernetes versions are included in EKS Anywhere as they become available in EKS Distro.

Unlike Amazon EKS, there are no automatic upgrades in EKS Anywhere and you have full control over when you upgrade. On the end of support date, you can still create new EKS Anywhere clusters with the unsupported Kubernetes version if the EKS Anywhere version you are using includes it. Any existing EKS Anywhere clusters with the unsupported Kubernetes version continue to function. As new Kubernetes versions become available in EKS Anywhere, we recommend that you proactively update your clusters to use the latest available Kubernetes version to remain on versions that receive CVE patches and bug fixes.

Reference the table below for release and support dates for each Kubernetes version in EKS Anywhere. The Release Date column denotes the EKS Anywhere release date when the Kubernetes version was first supported in EKS Anywhere. Note, dates with only a month and a year are approximate and are updated with an exact date when it’s known.

Kubernetes Version Release Date Support End
1.29 February 2, 2024 March, 2025
1.28 October 10, 2023 December, 2024
1.27 June 6, 2023 August, 2024
1.26 March 3, 2023 June, 2024
1.25 January 1, 2023 May, 2024
1.24 October 10, 2022 February 2, 2024
1.23 August 8, 2022 October 10, 2023
1.22 March 3, 2022 June 6, 2023
  • Older Kubernetes versions are omitted from this table for brevity; reference the EKS Anywhere GitHub for older versions.

EKS Anywhere versions

Each EKS Anywhere version includes all components required to create and manage EKS Anywhere clusters. This includes but is not limited to:

  • Administrative / CLI components (eksctl CLI, image-builder, diagnostics-collector)
  • Management components (Cluster API controller, EKS Anywhere controller, provider-specific controllers)
  • Cluster components (Kubernetes, Cilium)

You can find details about each EKS Anywhere release in the EKS Anywhere release manifest. The release manifest contains references to the corresponding bundle manifest for each EKS Anywhere version. Within the bundle manifest, you will find the components included in a specific EKS Anywhere version. The images running in your deployment use the same URI values specified in the bundle manifest for that component. For example, see the bundle manifest for EKS Anywhere version v0.20.2.
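
As a hedged example of how you can inspect these manifests yourself (assuming curl and yq are installed on your machine; the manifest URL and the .spec.latestVersion and .spec.releases fields are the same ones used in the installation instructions later in this document):

# Show the latest EKS Anywhere version recorded in the release manifest
curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location \
    | yq ".spec.latestVersion"

# List every EKS Anywhere version recorded in the release manifest
curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location \
    | yq ".spec.releases[].version"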

Starting in 2024, EKS Anywhere follows a 4-month release cadence for minor versions and a 2-week cadence for patch versions. Common vulnerabilities and exposures (CVE) patches and bug fixes, including those for the supported Kubernetes versions, are included in the latest EKS Anywhere minor version (version N). High and critical CVE fixes and bug fixes are also backported to the penultimate EKS Anywhere minor version (version N-1), which follows a monthly patch release cadence.

Reference the table below for release dates and patch support for each EKS Anywhere version. This table shows the Kubernetes versions that are supported in each EKS Anywhere version.

EKS Anywhere Version Supported Kubernetes Versions Release Date Receiving Patches
0.19 1.29, 1.28, 1.27, 1.26, 1.25 February 2, 2024 Yes
0.18 1.28, 1.27, 1.26, 1.25, 1.24 October 10, 2023 No
0.17 1.27, 1.26, 1.25, 1.24, 1.23 August 8, 2023 No
0.16 1.27, 1.26, 1.25, 1.24, 1.23 June 6, 2023 No
0.15 1.26, 1.25, 1.24, 1.23, 1.22 March 3, 2023 No
0.14 1.25, 1.24, 1.23, 1.22, 1.21 January 1, 2023 No
0.13 1.24, 1.23, 1.22, 1.21 December 12, 2022 No
0.12 1.24, 1.23, 1.22, 1.21, 1.20 October 10, 2022 No
0.11 1.23, 1.22, 1.21, 1.20 August 8, 2022 No
0.10 1.22, 1.21, 1.20 June 6, 2022 No
0.9 1.22, 1.21, 1.20 May 5, 2022 No
0.8 1.22, 1.21, 1.20 March 3, 2022 No
  • Older EKS Anywhere versions are omitted from this table for brevity; reference the EKS Anywhere GitHub for older versions.

Operating System versions

Bottlerocket, Ubuntu, and Red Hat Enterprise Linux (RHEL) can be used as operating systems for nodes in EKS Anywhere clusters. Reference the table below for operating system version support in EKS Anywhere. For information on operating system management in EKS Anywhere, reference the Operating System Management Overview page.

OS OS Versions Supported EKS Anywhere version
Ubuntu 22.04 0.17 and above
Ubuntu 20.04 0.5 and above
Bottlerocket 1.19.1 0.19
Bottlerocket 1.15.1 0.18
Bottlerocket 1.13.1 0.15-0.17
Bottlerocket 1.12.0 0.14
Bottlerocket 1.10.1 0.12
RHEL 9.x* 0.18
RHEL 8.x 0.12 and above

*CloudStack and Nutanix only

  • For details on supported operating systems for Admin machines, see the Admin Machine page.
  • Older Bottlerocket versions are omitted from this table for brevity

Frequently Asked Questions (FAQs)

Where can I find details of what changed in an EKS Anywhere version?

For changes included in an EKS Anywhere version, reference the EKS Anywhere Changelog.

Will I get notified when there is a new EKS Anywhere version release?

You will get notified if you have subscribed as documented on the Release Alerts page.

Does Amazon EKS extended support for Kubernetes versions apply to EKS Anywhere clusters?

No. Amazon EKS extended support for Kubernetes versions does not apply to EKS Anywhere at this time. To request this capability, please comment or upvote on this EKS Anywhere GitHub issue.

What happens on the end of support date for a Kubernetes version?

Unlike Amazon EKS, there are no forced upgrades in EKS Anywhere. On the end of support date, you can still create new EKS Anywhere clusters with the unsupported Kubernetes version if the EKS Anywhere version you are using includes it. Any existing EKS Anywhere clusters with the unsupported Kubernetes version will continue to function. However, you will not be able to receive CVE patches or bug fixes for the unsupported Kubernetes version. Troubleshooting support, configuration guidance, and upgrade assistance is available for all Kubernetes and EKS Anywhere versions for customers with EKS Anywhere Enterprise Subscriptions.

What EKS Anywhere versions are supported if you have the EKS Anywhere Enterprise Subscription?

If you have purchased an EKS Anywhere Enterprise Subscription, AWS will provide troubleshooting support, configuration guidance, and upgrade assistance for your licensed clusters, irrespective of the EKS Anywhere version they are running on. However, because CVE patches and bug fixes are only included in the latest and penultimate EKS Anywhere versions, it is recommended to use either of these releases to manage your deployments and keep them up to date. With an EKS Anywhere Enterprise Subscription, AWS will assist you in upgrading your licensed clusters to the latest EKS Anywhere version.

Can I use different EKS Anywhere minor versions for my management cluster and workload clusters?

Yes, the management cluster can be upgraded to newer EKS Anywhere versions than the workload clusters that it manages. However, we only support a maximum skew of one EKS Anywhere minor version for management and workload clusters. This means the management cluster can be at most one EKS Anywhere minor version newer than the workload clusters (i.e., a management cluster with v0.18.x and workload clusters with v0.17.x). In the event that you want to upgrade your management cluster to a version that does not satisfy this condition, we recommend upgrading the workload clusters' EKS Anywhere version first to match the current management cluster's EKS Anywhere version, followed by an upgrade to your desired EKS Anywhere version for the management cluster.

NOTE: Workload clusters can only be created with or upgraded to the same EKS Anywhere version that the management cluster was created with. For example, if you create your management cluster with v0.18.0, you can only create workload clusters with v0.18.0. However, if you create your management cluster with version v0.17.0 and then upgrade to v0.18.0, you can create workload clusters with either v0.17.0 or v0.18.0.

Can I skip EKS Anywhere minor versions during cluster upgrade (such as going from v0.16 directly to v0.18)?

No. We perform regular upgrade reliability testing for sequential version upgrades (i.e., going from version 0.16 to 0.17, then from version 0.17 to 0.18), but we do not perform testing on non-sequential upgrade paths (i.e., going from version 0.16 directly to 0.18). You should not skip minor versions during cluster upgrade. However, you can choose to skip patch versions.
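
For example, moving a cluster from v0.16 to v0.18 is performed as two separate upgrades, each run with the matching eksctl-anywhere binary version. This is a hedged sketch of the sequence; see the upgrade documentation for the complete procedure for your provider.

# Upgrade 0.16 -> 0.17 using the v0.17 eksctl-anywhere binary
eksctl anywhere upgrade cluster -f $CLUSTER_NAME.yaml

# Then upgrade 0.17 -> 0.18 using the v0.18 eksctl-anywhere binary
eksctl anywhere upgrade cluster -f $CLUSTER_NAME.yaml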

What is the difference between an EKS Anywhere minor version versus a patch version?

An EKS Anywhere minor version includes new EKS Anywhere capabilities, bug fixes, security patches, and new Kubernetes minor versions if they are available. An EKS Anywhere patch version generally includes only bug fixes, security patches, and Kubernetes patch version increments. EKS Anywhere patch versions are released more frequently than EKS Anywhere minor versions so you can receive the latest security and bug fixes sooner. For example, patch releases for the latest version follow a biweekly release cadence and those for the penultimate EKS Anywhere version follow a monthly cadence.

What kind of fixes are patched in the latest EKS Anywhere minor version?

The latest EKS Anywhere minor version will receive CVE patches and bug fixes for EKS Anywhere components and the Kubernetes versions that are supported by the corresponding EKS Anywhere version. New curated packages versions, if any, will be made available as upgrades for this minor version.

What kind of fixes are patched in the penultimate EKS Anywhere minor version?

The penultimate EKS Anywhere minor version will receive only high and critical CVE patches and updates only to those Kubernetes versions that are supported by both the corresponding EKS Anywhere version as well as EKS Distro. New curated packages versions, if any, will be made available as upgrades for this minor version.

Will I get notified when support is ending for a Kubernetes version on EKS Anywhere?

Not automatically. You should check this page regularly and take note of the end of support date for the Kubernetes version you’re using.

3.3 - Support

Overview of support for EKS Anywhere

EKS Anywhere is available as open source software that you can run on hardware in your data center or edge environment.

You can purchase EKS Anywhere Enterprise Subscriptions for 24/7 support from AWS subject matter experts and access to EKS Anywhere Curated Packages. You can only receive support for your EKS Anywhere clusters that are licensed under an active EKS Anywhere Enterprise Subscription. EKS Anywhere Enterprise Subscriptions are available for a 1-year or 3-year term, and are priced on a per cluster basis.

EKS Anywhere Enterprise Subscriptions include support for the following components:

  • EKS Distro (see documentation for components)
  • EKS Anywhere core components such as the Cilium CNI, Flux GitOps controller, kube-vip, EKS Anywhere CLI, EKS Anywhere controllers, image builder, and EKS Connector
  • EKS Anywhere Curated Packages (see curated packages list for list of packages)
  • EKS Anywhere cluster lifecycle operations such as creating, scaling, and upgrading
  • EKS Anywhere troubleshooting, general guidance, and best practices
  • Bottlerocket node operating system

Visit the following links for more information on EKS Anywhere Enterprise Subscriptions

If you are using EKS Anywhere and have not purchased a subscription, you can file an issue in the EKS Anywhere GitHub Repository, and someone will get back to you as soon as possible. If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via the vulnerability reporting page. Please do not create a public GitHub issue for security problems.

FAQs

1. How much does an EKS Anywhere Enterprise Subscription cost?

For pricing information, visit the EKS Anywhere Pricing page.

2. How can I purchase an EKS Anywhere Enterprise Subscription?

Reference the Purchase Subscriptions documentation for instructions on how to purchase.

3. Are subscriptions I previously purchased manually integrated into the EKS console?

No, EKS Anywhere Enterprise Subscriptions purchased manually before October 2023 cannot be viewed or managed through the EKS console, APIs, and AWS CLI.

4. Can I cancel my subscription in the EKS console, APIs, and AWS CLI?

You can cancel your subscription within the first 7 days of purchase by filing an AWS Support ticket. When you cancel your subscription within the first 7 days, you are not charged for the subscription. To cancel your subscription outside of the 7-day time period, contact your AWS account team.

5. Can I cancel my subscription after I use it to file an AWS Support ticket?

No, if you have used your subscription to file an AWS Support ticket requesting EKS Anywhere support, then we are unable to cancel the subscription or refund the purchase regardless of the 7-day grace period, since you have leveraged support as part of the subscription.

6. In which AWS Regions can I purchase subscriptions?

You can purchase subscriptions in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Europe (Zurich), Israel (Tel Aviv), Middle East (Bahrain), Middle East (UAE), and South America (Sao Paulo).

7. Can I renew my subscription through the EKS console, APIs, and AWS CLI?

Yes, you can configure auto renewal during subscription creation or at any time during your subscription term. When auto renewal is enabled for your subscription, the subscription and associated licenses will be automatically renewed for the term of the existing subscription (1-year or 3-years). The 7-day cancellation period does not apply to renewals. You do not need to reapply licenses to your EKS Anywhere clusters when subscriptions are automatically renewed.

8. Can I edit my subscription through the EKS console, APIs, and AWS CLI?

You can edit the auto renewal and tags configurations for your subscription with the EKS console, APIs, and AWS CLI. To change the term or license quantity for a subscription, you must create a new subscription.

9. What happens when a subscription expires?

When subscriptions expire, licenses associated with the subscription can no longer be used for new support tickets, access to EKS Anywhere Curated Packages is revoked, and you are no longer billed for the subscription. Support tickets created during the active subscription period will continue to be serviced. You will receive emails 3 months, 1 month, and 1 week before subscriptions expire, and an alert is presented in the EKS console for approaching expiration dates. Subscriptions can be viewed with the EKS console, APIs, and AWS CLI after expiration.

10. Can I share access to curated packages with other AWS accounts?

Yes, reference the Share curated packages access documentation for instructions on how to share access to curated packages with other AWS accounts in your organization.

11. How do I apply licenses to my EKS Anywhere clusters?

Reference the License cluster documentation for instructions on how to apply licenses to your EKS Anywhere clusters.

12. Is there an option to pay for subscriptions upfront?

If you need to pay upfront for subscriptions, please contact your AWS account team.

13. Is there a free-trial option for subscriptions?

To request a free-trial, please contact your AWS account team.

3.4 - EKS Anywhere Curated Packages

Overview of EKS Anywhere Curated Packages

Overview

Amazon EKS Anywhere Curated Packages are Amazon-curated software packages that extend the core functionalities of Kubernetes on your EKS Anywhere clusters. If you operate EKS Anywhere clusters on-premises, you probably install additional software to ensure the security and reliability of your clusters. However, you may be spending a lot of effort researching the right software, tracking updates, and testing it for compatibility. Now with EKS Anywhere Curated Packages, you can rely on Amazon to provide trusted, up-to-date, and compatible software that is supported by Amazon, reducing the need for multiple vendor support agreements.

  • Amazon-built: All container images of the packages are built from source code by Amazon, including the open source (OSS) packages. OSS package images are built from the open source upstream.
  • Amazon-scanned: Amazon scans the container images including the OSS package images daily for security vulnerabilities and provides remediation.
  • Amazon-signed: Amazon signs the package bundle manifest (a Kubernetes manifest) for the list of curated packages. The manifest is signed with AWS Key Management Service (AWS KMS) managed private keys. The curated packages are installed and managed by a package controller on the clusters. Amazon provides validation of signatures through an admission control webhook in the package controller and the public keys distributed in the bundle manifest file.
  • Amazon-tested: Amazon tests the compatibility of all curated packages including the OSS packages with each new version of EKS Anywhere.
  • Amazon-supported: All curated packages including the curated OSS packages are supported under the EKS Anywhere Support Subscription.

The main components of EKS Anywhere Curated Packages are the package controller, the package build artifacts, and the command line interface. The package controller runs in a pod in an EKS Anywhere cluster and manages the lifecycle of all curated packages.
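
A hedged sketch of the command line interface for curated packages is shown below. The cluster name mgmt is a placeholder, and flag names can vary between EKS Anywhere versions, so treat this as illustrative rather than definitive.

# List curated packages available for a Kubernetes version
eksctl anywhere list packages --kube-version 1.28

# Generate a package configuration for a curated package (here Harbor) and install it;
# the package controller then manages the package lifecycle on the cluster
eksctl anywhere generate package harbor --cluster mgmt > harbor.yaml
eksctl anywhere create packages -f harbor.yaml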

Curated packages

Please check out the curated package list for the complete list of EKS Anywhere curated packages.

FAQ

  1. Can I install software not from the curated package list?

    Yes. You can install any optional software of your choice. Be aware you cannot use EKS Anywhere tooling to install or update your self-managed software. Amazon does not provide testing, security patching, software updates, or customer support for your self-managed software.

  2. Can I install software that’s on the curated package list but not sourced from EKS Anywhere repository?

    If, for example, you deploy a Harbor image that is not built and signed by Amazon, Amazon will not provide testing or customer support for your self-built images.

Curated package list

Name Description Versions GitHub
ADOT ADOT Collector is an AWS distribution of the OpenTelemetry Collector, which provides a vendor-agnostic solution to receive, process and export telemetry data. v0.25.0 https://github.com/aws-observability/aws-otel-collector
Cert-manager Cert-manager is a certificate manager for Kubernetes clusters. v1.9.1 https://github.com/cert-manager/cert-manager
Cluster Autoscaler Cluster Autoscaler is a component that automatically adjusts the size of a Kubernetes Cluster so that all pods have a place to run and there are no unneeded nodes. v9.21.0 https://github.com/kubernetes/autoscaler
Emissary Ingress Emissary Ingress is an open source Ingress supporting API Gateway + Layer 7 load balancer built on Envoy Proxy. v3.3.0 https://github.com/emissary-ingress/emissary/
Harbor Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. v2.7.1, v2.5.1 https://github.com/goharbor/harbor https://github.com/goharbor/harbor-helm
MetalLB MetalLB is a virtual IP provider for services of type LoadBalancer supporting ARP and BGP. v0.13.7 https://github.com/metallb/metallb/
Metrics Server Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. v3.8.2 https://github.com/kubernetes-sigs/metrics-server
Prometheus Prometheus is an open-source systems monitoring and alerting toolkit that collects and stores metrics as time series data. v2.41.0 https://github.com/prometheus/prometheus

3.5 - Compare EKS Anywhere and EKS

Comparing EKS Anywhere features to Amazon EKS

EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises and automation tooling for cluster lifecycle operations. EKS Anywhere is certified Kubernetes conformant, so existing applications that run on upstream Kubernetes are compatible with EKS Anywhere.

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service that makes it easy for you to run Kubernetes on the AWS Cloud. Amazon EKS is certified Kubernetes conformant, so existing applications that run on upstream Kubernetes are compatible with Amazon EKS. To learn more about Amazon EKS, see Amazon Elastic Kubernetes Service.

Comparing Amazon EKS Anywhere to Amazon EKS

Feature Amazon EKS Anywhere Amazon EKS
Control plane
K8s control plane management Managed by customer Managed by AWS
K8s control plane location Customer-managed infrastructure AWS Cloud
Cluster updates Customer-managed updates for control plane and worker nodes AWS-managed in-place updates for control plane and AWS managed updates for worker nodes.
Compute
Compute options vSphere, bare metal, Snowball Edge, CloudStack, Nutanix Amazon EC2, AWS Fargate
Node operating systems Bottlerocket, Ubuntu, RHEL Amazon Linux 2, Windows Server, Bottlerocket, Ubuntu
Physical hardware (servers, network equipment, storage, etc.) Customer-managed AWS-managed
Serverless Not supported Amazon EKS on AWS Fargate
Management
Command line interface (CLI) eksctl CLI eksctl CLI, AWS CLI
AWS console view Optional with EKS Connector Native EKS console integration
Infrastructure-as-code Kubernetes API-compatible tooling, Terraform, GitOps, other 3rd-party solutions AWS CloudFormation, Terraform, GitOps, other 3rd-party solutions
Logging and monitoring CloudWatch, Prometheus, other 3rd-party solutions CloudWatch, Prometheus, other 3rd-party solutions
GitOps Flux controller Flux controller
Functions and tooling
Networking and Security Cilium CNI and network policy supported Amazon VPC CNI supported. Other compatible 3rd-party CNI plugins available.
Load balancer MetalLB Elastic Load Balancing including Application Load Balancer (ALB), and Network Load Balancer (NLB)
Service mesh Community or 3rd-party solutions AWS App Mesh, community, or 3rd-party solutions
Community tools and Helm Works with compatible community tooling and helm charts. Works with compatible community tooling and helm charts.
Pricing and support
Control plane pricing Free to download, paid Enterprise Subscription option Hourly pricing per cluster
AWS Support Additional annual subscription (per cluster) for AWS support Basic support included. Included in paid AWS support plans (developer, business, and enterprise)

Comparing Amazon EKS Anywhere to Amazon EKS on Outposts

Like EKS Anywhere, Amazon EKS on Outposts provides a means of running Kubernetes clusters using EKS software on-premises. The main differences are that:

  • Amazon provides the hardware with Outposts, while most EKS Anywhere providers leverage the customer’s own hardware.
  • With Amazon EKS on Outposts, the Kubernetes control plane is fully managed by AWS. With EKS Anywhere, customers are responsible for managing the lifecycle of the Kubernetes control plane with EKS Anywhere automation tooling.
  • Customers can use Amazon EKS on Outposts with the same console, APIs, and tools they use to run Amazon EKS clusters in AWS Cloud. With EKS Anywhere, customers can use the eksctl CLI to manage their clusters, optionally connect their clusters to the EKS console for observability, and optionally use infrastructure as code tools such as Terraform and GitOps to manage their clusters. However, the primary interfaces for EKS Anywhere are the EKS Anywhere Custom Resources. Amazon EKS does not have a CRD-based interface today.
  • Amazon EKS on Outposts is a regional AWS service that requires a consistent, reliable connection from the Outpost to the AWS Region. EKS Anywhere is a standalone software offering that can run entirely disconnected from AWS Cloud, including air-gapped environments.

Outposts have two deployment methods available:

  • Extended clusters: With extended clusters, the Kubernetes control plane runs in an AWS Region, while Kubernetes nodes run on Outpost hardware.

  • Local clusters: With local clusters, both the Kubernetes control plane and nodes run on Outpost hardware.

For more information, see Amazon EKS on AWS Outposts.


4 - Installation

How to install the EKS Anywhere CLI, set up prerequisites, and create EKS Anywhere clusters

This section explains how to set up and run EKS Anywhere. The pages in this section are purposefully ordered, and we recommend stepping through the pages one-by-one until you are ready to choose an infrastructure provider for your EKS Anywhere cluster.

4.1 - Overview

Overview of the EKS Anywhere cluster creation process

Overview

Kubernetes clusters require infrastructure capacity for the Kubernetes control plane, etcd, and worker nodes. EKS Anywhere provisions and manages this capacity on your behalf when you create EKS Anywhere clusters by interacting with the underlying infrastructure interfaces. Today, EKS Anywhere supports vSphere, bare metal, Snow, Apache CloudStack and Nutanix infrastructure providers. EKS Anywhere can also run on Docker for dev/test and non-production deployments only.

If you are creating your first EKS Anywhere cluster, you must first prepare an Administrative machine (Admin machine) where you install and run the EKS Anywhere CLI. The EKS Anywhere CLI (eksctl anywhere) is the primary tool you will use to create and manage your first cluster.

Your interface for configuring EKS Anywhere clusters is the cluster specification yaml (cluster spec). This cluster spec is where you define cluster configuration including cluster name, network, Kubernetes version, control plane settings, worker node settings, and operating system. You also specify environment-specific configuration in the cluster spec for vSphere, bare metal, Snow, CloudStack, and Nutanix. When you perform cluster lifecycle operations, you modify the cluster spec, and then apply the cluster spec changes to your cluster in a declarative manner.

Before creating EKS Anywhere clusters, you must determine the operating system you will use. EKS Anywhere supports Bottlerocket, Ubuntu, and Red Hat Enterprise Linux (RHEL). Not all operating systems are supported on every infrastructure provider. If you are using Ubuntu or RHEL, you must build your images before creating your cluster. For details, reference the Operating System Management documentation.

During initial cluster creation, the EKS Anywhere CLI performs the following high-level actions:

  • Confirms the target cluster environment is available
  • Confirms authentication succeeds to the target environment
  • Performs infrastructure provider-specific validations
  • Creates a bootstrap cluster (Kind cluster) on the Admin machine
  • Installs Cluster API (CAPI) and EKS-A core components on the bootstrap cluster
  • Creates the EKS Anywhere cluster on the infrastructure provider
  • Moves the Cluster API and EKS-A core components from the bootstrap cluster to the EKS Anywhere cluster
  • Shuts down the bootstrap cluster

During initial cluster creation, you can observe the progress through the EKS Anywhere CLI output and by monitoring the CAPI and EKS-A controller manager logs on the bootstrap cluster. To access the bootstrap cluster, use the kubeconfig file located at <cluster-name>/generated/<cluster-name>.kind.kubeconfig.

After initial cluster creation, you can access your cluster using the kubeconfig file, which is located at <cluster-name>/<cluster-name>-eks-a-cluster.kubeconfig. You can SSH to the nodes that EKS Anywhere created on your behalf with the keys in <cluster-name>/eks-a-id_rsa by default.
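
For example, assuming a cluster named mgmt created from the current working directory (the name is a placeholder), you can point kubectl at the new cluster as follows:

export CLUSTER_NAME=mgmt
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl get nodes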

While you do not need to maintain your Admin machine, you must save your kubeconfig, SSH keys, and EKS Anywhere cluster spec to a safe location if you intend to use a different Admin machine in the future.

See the Admin machine page for details and requirements to get started setting up your Admin machine.

Infrastructure Providers

EKS Anywhere uses an infrastructure provider model for creating, upgrading, and managing Kubernetes clusters that is based on the Kubernetes Cluster API (CAPI) project.

Like CAPI, EKS Anywhere runs a Kind cluster on the Admin machine to act as a bootstrap cluster. However, instead of using CAPI directly with the clusterctl command to manage EKS Anywhere clusters, you use the eksctl anywhere command which simplifies that operation.

Before creating your first EKS Anywhere cluster, you must choose your infrastructure provider and ensure the requirements for that environment are met. Reference the infrastructure provider-specific sections below for more information.

Deployment Architectures

EKS Anywhere supports two deployment architectures:

  • Standalone clusters: If you are only running a single EKS Anywhere cluster, you can deploy a standalone cluster. This deployment type runs the EKS Anywhere management components on the same cluster that runs workloads. Standalone clusters must be managed with the eksctl CLI. A standalone cluster is effectively a management cluster, but in this deployment type, only manages itself.

  • Management cluster with separate workload clusters: If you plan to deploy multiple EKS Anywhere clusters, it’s recommended to deploy a management cluster with separate workload clusters. With this deployment type, the EKS Anywhere management components are only run on the management cluster, and the management cluster can be used to perform cluster lifecycle operations on a fleet of workload clusters. The management cluster must be managed with the eksctl CLI, whereas workload clusters can be managed with the eksctl CLI or with Kubernetes API-compatible clients such as kubectl, GitOps, or Terraform.

For details on the EKS Anywhere architectures, see the Architecture page.

EKS Anywhere software

When setting up your Admin machine, you need Internet access to the repositories hosting the EKS Anywhere software. EKS Anywhere software is divided into two types of components: The EKS Anywhere CLI for managing clusters and the cluster components and controllers used to run workloads and configure clusters.

  • Command line tools: Binaries installed on the Admin machine include eksctl, eksctl-anywhere, kubectl, and aws-iam-authenticator.
  • Cluster components and controllers: These components are listed on the artifacts page for each provider.

If you are operating behind a firewall that limits access to the Internet, you can configure EKS Anywhere to use a proxy service to connect to the Internet.

For more information on the software used in EKS Distro, which includes the Kubernetes release and related software in EKS Anywhere, see the EKS Distro Releases page.

4.2 - 1. Admin Machine

Steps for setting up the Admin Machine

EKS Anywhere will create and manage Kubernetes clusters on multiple providers. Currently we support creating development clusters locally using Docker and production clusters from providers listed on the providers page.

Creating an EKS Anywhere cluster begins with setting up an Administrative machine where you run all EKS Anywhere lifecycle operations as well as Docker, kubectl, and prerequisite utilities. On your Administrative machine, you will need to install eksctl, a CLI tool for creating and managing clusters on EKS, and the eksctl-anywhere plugin, an extension to create and manage EKS Anywhere clusters on-premises. You can then proceed to the cluster networking and provider-specific steps. See Create cluster workflow for an overview of the cluster creation process.

NOTE: For Snow provider, if you ordered a Snowball Edge device with EKS Anywhere enabled, it is preconfigured with an Admin AMI which contains the necessary binaries, dependencies, and artifacts to create an EKS Anywhere cluster. Skip to the steps on Create Snow production cluster to get started with EKS Anywhere on Snow.

Administrative machine prerequisites

System and network requirements

  • Mac OS 10.15+ / Ubuntu 20.04.2 LTS or 22.04 LTS / RHEL or Rocky Linux 8.8+
  • 4 CPU cores
  • 16GB memory
  • 30GB free disk space
  • If you are running in an airgapped environment, the Admin machine must be amd64.
  • If you are running EKS Anywhere on bare metal, the Admin machine must be on the same Layer 2 network as the cluster machines.

Here are a few other things to keep in mind:

  • If you are using Ubuntu, use the Docker CE installation instructions to install Docker and not the Snap installation, as described here.

  • If you are using EKS Anywhere v0.15 or earlier and Ubuntu 21.10 or 22.04, you will need to switch from cgroups v2 to cgroups v1. For details, see Troubleshooting Guide.

  • If you are using Docker Desktop, you need to know that:

    • For EKS Anywhere Bare Metal, Docker Desktop is not supported
    • For EKS Anywhere vSphere, if you are using EKS Anywhere v0.15 or earlier and Mac OS Docker Desktop 4.4.2 or newer "deprecatedCgroupv1": true must be set in ~/Library/Group\ Containers/group.com.docker/settings.json.

Tools

Install EKS Anywhere CLI tools

Via Homebrew (macOS and Linux)

You can install eksctl and eksctl-anywhere with Homebrew. This package will also install kubectl and aws-iam-authenticator, which are helpful for testing EKS Anywhere clusters.

brew install aws/tap/eks-anywhere

Manually (macOS and Linux)

Install the latest release of eksctl. The EKS Anywhere plugin requires eksctl version 0.66.0 or newer.

curl "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C /tmp
sudo install -m 0755 /tmp/eksctl /usr/local/bin/eksctl

Install the eksctl-anywhere plugin.

RELEASE_VERSION=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.latestVersion")
EKS_ANYWHERE_TARBALL_URL=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.releases[] | select(.version==\"$RELEASE_VERSION\").eksABinary.$(uname -s | tr A-Z a-z).uri")
curl $EKS_ANYWHERE_TARBALL_URL \
    --silent --location \
    | tar xz ./eksctl-anywhere
sudo install -m 0755 ./eksctl-anywhere /usr/local/bin/eksctl-anywhere

Install the kubectl Kubernetes command line tool. This can be done by following the instructions here.

Or you can install the latest kubectl directly with the following.

export OS="$(uname -s | tr A-Z a-z)" ARCH=$(test "$(uname -m)" = 'x86_64' && echo 'amd64' || echo 'arm64')
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/${OS}/${ARCH}/kubectl"
sudo install -m 0755 ./kubectl /usr/local/bin/kubectl

Upgrade eksctl-anywhere

If you installed eksctl-anywhere via Homebrew, you can upgrade the binary with:

brew update
brew upgrade aws/tap/eks-anywhere

If you installed eksctl-anywhere manually you should follow the installation steps to download the latest release.

You can verify your installed version with:

eksctl anywhere version

Prepare for airgapped deployments (optional)

For more information on how to prepare the Administrative machine for airgapped environments, go to the Airgapped page.

Deploy a cluster

Once you have the tools installed, go to the EKS Anywhere providers page for instructions on creating a cluster on your chosen provider.

4.3 - 2. Airgapped (optional)

Configure EKS Anywhere for airgapped environments

EKS Anywhere can be used in airgapped environments, where clusters are not connected to the internet or external networks. The following diagrams illustrate how to set up for cluster creation in an airgapped environment:

Download EKS Anywhere artifacts to Admin machine

If you are planning to run EKS Anywhere in an airgapped environment, before you create a cluster, you must temporarily connect your Admin machine to the internet to install the eksctl CLI and pull the required EKS Anywhere dependencies.

Disconnect Admin machine from Internet to create cluster

Once these dependencies are downloaded and imported in a local registry, you no longer need internet access. In the EKS Anywhere cluster specification, you can configure EKS Anywhere to use your local registry mirror. When the registry mirror configuration is set in the EKS Anywhere cluster specification, EKS Anywhere configures containerd to pull from that registry instead of Amazon ECR during cluster creation and lifecycle operations. For more information, reference the Registry Mirror Configuration documentation.

If you are using Ubuntu or RHEL as the operating system for nodes in your EKS Anywhere cluster, you must connect to the internet while building the images with the EKS Anywhere image-builder tool. After building the operating system images, you can configure EKS Anywhere to pull the operating system images from a location of your choosing in the EKS Anywhere cluster specification. For more information on the image building process and operating system cluster specification, reference the Operating System Management documentation.
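
As a hedged sketch of this image building step (the flags shown follow the image-builder documentation for recent releases and are assumptions here; vsphere-connection.json is a placeholder for your hypervisor connection details):

# Build an Ubuntu node image for vSphere for a given Kubernetes release channel
image-builder build --os ubuntu --hypervisor vsphere \
    --release-channel 1-28 --vsphere-config vsphere-connection.json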

Overview

The process for preparing your airgapped environment for EKS Anywhere is summarized by the following steps:

  1. Use the eksctl anywhere CLI to download EKS Anywhere artifacts. These artifacts are yaml files that contain the list and locations of the EKS Anywhere dependencies.
  2. Use the eksctl anywhere CLI to download EKS Anywhere images. These images include EKS Anywhere dependencies including EKS Distro components, Cluster API provider components, and EKS Anywhere components such as the EKS Anywhere controllers, Cilium CNI, kube-vip, and cert-manager.
  3. Set up your local registry following the steps in the Registry Mirror Configuration documentation.
  4. Use the eksctl anywhere CLI to import the EKS Anywhere images to your local registry.
  5. Optionally use the eksctl anywhere CLI to copy EKS Anywhere Curated Packages images to your local registry.

Prerequisites

  • An existing Admin machine
  • Docker running on the Admin machine
  • At least 80GB in storage space on the Admin machine to temporarily store the EKS Anywhere images locally before importing them to your local registry. Currently, when downloading images, EKS Anywhere pulls all dependencies for all infrastructure providers and supported Kubernetes versions.
  • The download and import images commands must be run on an amd64 machine to import amd64 images to the registry mirror.

Procedure

  1. Download the EKS Anywhere artifacts that contain the list and locations of the EKS Anywhere dependencies. A compressed file eks-anywhere-downloads.tar.gz will be downloaded. You can use the eksctl anywhere download artifacts --dry-run command to see the list of artifacts it will download.

    eksctl anywhere download artifacts
    
  2. Decompress the eks-anywhere-downloads.tar.gz file using the following command. This will create an eks-anywhere-downloads folder.

    tar -xvf eks-anywhere-downloads.tar.gz
    
  3. Download the EKS Anywhere image dependencies to the Admin machine. This command may take several minutes (10+) to complete. To monitor the progress of the command, you can run with the -v 6 command line argument, which will show details of the images that are being pulled. Docker must be running for the following command to succeed.

    eksctl anywhere download images -o images.tar
    
  4. Set up a local registry mirror to host the downloaded EKS Anywhere images and configure your Admin machine with the certificates and authentication information if your registry requires it. For details, refer to the Registry Mirror Configuration documentation.

  5. Import images to the local registry mirror using the following command. Set REGISTRY_MIRROR_URL to the url of the local registry mirror you created in the previous step. This command may take several minutes to complete. To monitor the progress of the command, you can run with the -v 6 command line argument. When using self-signed certificates for your registry, you should run with the --insecure command line argument to indicate skipping TLS verification while pushing helm charts and bundles.

    export REGISTRY_MIRROR_URL=<registryurl>
    
    eksctl anywhere import images -i images.tar -r ${REGISTRY_MIRROR_URL} \
       --bundles ./eks-anywhere-downloads/bundle-release.yaml
    
  6. Optionally import curated packages to your registry mirror. The curated packages images are copied from Amazon ECR to your local registry mirror in a single step, as opposed to separate download and import steps. For post-cluster creation steps, reference the Curated Packages documentation.

    Expand for curated packages instructions

    If your EKS Anywhere cluster is running in an airgapped environment, and you set up a local registry mirror, you can copy curated packages from Amazon ECR to your local registry mirror with the following command.

    Set $KUBEVERSION to be equal to the spec.kubernetesVersion of your EKS Anywhere cluster specification.

    The copy packages command uses the credentials in your docker config file. So you must docker login to the source registries and the destination registry before running the command.

    When using self-signed certificates for your registry, you should run with the --dst-insecure command line argument to indicate skipping TLS verification while copying curated packages.

    eksctl anywhere copy packages \
      ${REGISTRY_MIRROR_URL}/curated-packages \
      --kube-version $KUBEVERSION \
      --src-chart-registry public.ecr.aws/eks-anywhere \
      --src-image-registry 783794618700.dkr.ecr.us-west-2.amazonaws.com
    

If the previous steps succeeded, all of the required EKS Anywhere dependencies are now present in your local registry. Before you create your EKS Anywhere cluster, configure registryMirrorConfiguration in your EKS Anywhere cluster specification with the information for your local registry. For details see the Registry Mirror Configuration documentation.
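
As an optional, hedged sanity check before cluster creation (this assumes your local registry exposes the standard OCI/Docker registry v2 API, that it is reachable from the Admin machine without additional authentication, and that REGISTRY_MIRROR_URL is still set from the import step):

# List repositories in the local registry mirror to confirm the import succeeded
curl --silent https://${REGISTRY_MIRROR_URL}/v2/_catalog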

NOTE: If you are running EKS Anywhere on bare metal, you must configure osImageURL and hookImagesURLPath in your EKS Anywhere cluster specification with the location of your node operating system image and the hook OS image. For details, reference the bare metal configuration documentation.


4.4 - 3. Cluster Networking

EKS Anywhere cluster networking

Cluster Networking

EKS Anywhere clusters use the clusterNetwork field in the cluster spec to allocate pod and service IPs. Once the cluster is created, the pods.cidrBlocks, services.cidrBlocks and nodes.cidrMaskSize fields are immutable. As a result, extra care should be taken to ensure that there are sufficient IPs and IP blocks available when provisioning large clusters.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12

The cluster pods.cidrBlocks is subdivided between nodes with a default block of size /24 per node, which can also be configured via the nodes.cidrMaskSize field. This node CIDR block is then used to assign pod IPs on the node. For example, with the default /24 node mask, a /16 pods.cidrBlocks yields 2^(24-16) = 256 per-node blocks, so the cluster can support up to 256 nodes with roughly 254 usable pod IPs each.

Ports and Protocols

EKS Anywhere requires that various ports on control plane and worker nodes be open. Some Kubernetes-specific ports need open access only from other Kubernetes nodes, while others are exposed externally. Beyond Kubernetes ports, someone managing an EKS Anywhere cluster must also have external access to ports on the underlying EKS Anywhere provider (such as VMware) and to external tooling (such as Jenkins).

If you are responsible for network firewall rules between nodes on your EKS Anywhere clusters, the following tables describe both Kubernetes and EKS Anywhere-specific ports you should be aware of.

Kubernetes control plane

The following table represents the ports published by the Kubernetes project that must be accessible on any Kubernetes control plane.

Protocol Direction Port Range Purpose Used By
TCP Inbound 6443 Kubernetes API server All
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 10259 kube-scheduler Self
TCP Inbound 10257 kube-controller-manager Self

Although etcd ports are included in the control plane section, you can also host your own etcd cluster externally or on custom ports.

Protocol Direction Port Range Purpose Used By
TCP Inbound 2379-2380 etcd server client API kube-apiserver, etcd

Use the following to access the SSH service on the control plane and etcd nodes:

Protocol Direction Port Range Purpose Used By
TCP Inbound 22 SSHD server SSH clients
Kubernetes worker nodes

The following table represents the ports published by the Kubernetes project that must be accessible on worker nodes.

Protocol Direction Port Range Purpose Used By
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 30000-32767 NodePort Services All

The API server port is sometimes switched to 443. Alternatively, the default port is kept as is and the API server is put behind a load balancer that listens on port 443 and routes the requests to the API server on the default port.

Use the following to access the SSH service on the worker nodes:

Protocol Direction Port Range Purpose Used By
TCP Inbound 22 SSHD server SSH clients
Bare Metal provider

On the Admin machine for a Bare Metal provider, the following ports need to be accessible to all the nodes in the cluster, from the same Layer 2 network, for initial network booting:

Protocol Direction Port Range Purpose Used By
UDP Inbound 67 Boots DHCP All nodes, for network boot
UDP Inbound 69 Boots TFTP All nodes, for network boot
UDP Inbound 514 Boots Syslog All nodes, for provisioning logs
TCP Inbound 80 Boots HTTP All nodes, for network boot
TCP Inbound 42113 Tink-server gRPC All nodes, talk to Tinkerbell
TCP Inbound 50061 Hegel HTTP All nodes, talk to Tinkerbell
TCP Outbound 623 Rufio IPMI All nodes, out-of-band power and next boot (optional)
TCP Outbound 80,443 Rufio Redfish All nodes, out-of-band power and next boot (optional)
VMware provider

The following table displays ports that need to be accessible from the VMware provider running EKS Anywhere:

Protocol Direction Port Range Purpose Used By
TCP Inbound 443 vCenter Server vCenter API endpoint
TCP Inbound 6443 Kubernetes API server Kubernetes API endpoint
TCP Inbound 2379 Manager Etcd API endpoint
TCP Inbound 2380 Manager Etcd API endpoint
Nutanix provider

The following table displays ports that need to be accessible from the Nutanix provider running EKS Anywhere:

Protocol Direction Port Range Purpose Used By
TCP Inbound 9440 Prism Central Server Prism Central API endpoint
TCP Inbound 6443 Kubernetes API server Kubernetes API endpoint
TCP Inbound 2379 Manager Etcd API endpoint
TCP Inbound 2380 Manager Etcd API endpoint
Snow provider

In addition to the Ports Required to Use AWS Services on an AWS Snowball Edge Device, the following table displays ports that need to be accessible from the Snow provider running EKS Anywhere:

Protocol Direction Port Range Purpose Used By
TCP Inbound 9092 Device Controller EKS Anywhere and CAPAS controller
TCP Inbound 8242 EC2 HTTPS endpoint EKS Anywhere and CAPAS controller
TCP Inbound 6443 Kubernetes API server Kubernetes API endpoint
TCP Inbound 2379 Manager Etcd API endpoint
TCP Inbound 2380 Manager Etcd API endpoint
Control plane management tools

A variety of control plane management tools are available to use with EKS Anywhere. One example is Jenkins.

Protocol Direction Port Range Purpose Used By
TCP Inbound 8080 Jenkins Server HTTP Jenkins endpoint
TCP Inbound 8443 Jenkins Server HTTPS Jenkins endpoint

4.5 - 4. Choose provider

Choose an infrastructure provider for EKS Anywhere clusters

EKS Anywhere supports many different types of infrastructure including VMware vSphere, bare metal, Snow, Nutanix, and Apache CloudStack. You can also run EKS Anywhere on Docker for dev/test use cases only. EKS Anywhere clusters can only run on a single infrastructure provider. For example, you cannot have some vSphere nodes, some bare metal nodes, and some Snow nodes in a single EKS Anywhere cluster. Management clusters must also run on the same infrastructure provider as workload clusters.

Detailed information on each infrastructure provider can be found in the sections below. Review the infrastructure provider’s prerequisites in-depth before creating your first cluster.

Install on vSphere
Install on Bare Metal
Install on Snow
Install on CloudStack
Install on Nutanix
Install on Docker (dev only)

4.6 - Create vSphere cluster

Create an EKS Anywhere cluster on vSphere

4.6.1 - Overview

Overview of EKS Anywhere cluster creation on vSphere

Creating a vSphere cluster

The following diagram illustrates what happens when you start the cluster creation process.

Start creating a vSphere cluster

Start creating EKS Anywhere cluster

1. Generate a config file for vSphere

To generate the config file, run the eksctl anywhere generate clusterconfig command, identifying the name of the provider (-p vsphere) and a cluster name, and redirect the output to a file. The result is a config file template that you need to modify for the specific instance of your provider.
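
For example, a minimal invocation (the cluster name mgmt is a placeholder):

export CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
    --provider vsphere > $CLUSTER_NAME.yaml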

2. Modify the config file

Using the generated cluster config file, make modifications to suit your situation. Details about this config file are contained on the vSphere Config page.

3. Launch the cluster creation

Once you have modified the cluster configuration file, running eksctl anywhere create cluster -f $CLUSTER_NAME.yaml starts the cluster creation process. To see details on the cluster creation process, increase verbosity (-v=9 provides maximum verbosity).
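For example, assuming your modified config file is named after your cluster:

    export CLUSTER_NAME=mgmt
    eksctl anywhere create cluster -f $CLUSTER_NAME.yaml -v=9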

4. Authenticate and create bootstrap cluster

After authenticating to vSphere and validating the assets there, the cluster creation process starts off creating a temporary Kubernetes bootstrap cluster on the Administrative machine. To begin, the cluster creation process runs a series of govc commands to check on the vSphere environment:

  • Checks that the vSphere environment is available.

  • Using the URL and credentials provided in the cluster spec files, authenticates to the vSphere provider.

  • Validates that the datacenter and the datacenter network exists.

  • Validates that the identified datastore (to store your EKS Anywhere cluster) exists, that the folder holding your EKS Anywhere cluster VMs exists, and that the resource pools containing compute resources exist. If you have multiple VSphereMachineConfig objects in your config file, you will see these validations repeated.

  • Validates the virtual machine templates to be used for the control plane and worker nodes (such as ubuntu-2004-kube-v1.20.7).

If all validations pass, you will see this message:

✅ Vsphere Provider setup is valid

Next, the process runs the kind command to build a single-node Kubernetes bootstrap cluster on the Administrative machine. This includes pulling the kind node image, preparing the node, writing the configuration, starting the control-plane, and installing CNI.

After this point the bootstrap cluster is installed, but not yet fully configured.

Continuing cluster creation

The following diagram illustrates the activities that occur next:

Continue creating EKS Anywhere cluster

1. Add CAPI management

Cluster API (CAPI) management is added to the bootstrap cluster to direct the creation of the target cluster.

2. Set up cluster

Configure the control plane and worker nodes.

3. Add Cilium networking

Add Cilium as the CNI plugin to use for networking between the cluster services and pods.

4. Add CAPI to target cluster

Add the CAPI service to the target cluster in preparation for it to take over management of the cluster after the cluster creation is completed and the bootstrap cluster is deleted. The bootstrap cluster can then begin moving the CAPI objects over to the target cluster, so it can take over the management of itself.

With the bootstrap cluster running and configured on the Administrative machine, the creation of the target cluster begins. It uses kubectl to apply a target cluster configuration as follows:

  • Once etcd, the control plane, and the worker nodes are ready, it applies the networking configuration to the target cluster.

  • CAPI providers are configured on the target cluster, in preparation for the target cluster to take over responsibilities for running the components needed to manage itself.

  • With CAPI running on the target cluster, CAPI objects for the target cluster are moved from the bootstrap cluster to the target cluster’s CAPI service (done internally with the clusterctl command).

  • Add Kubernetes CRDs and other addons that are specific to EKS Anywhere.

  • The cluster configuration is saved.

Once etcd, the control plane, and the worker nodes are ready, it applies the networking configuration to the workload cluster:

Installing networking on workload cluster

After that, the CAPI providers are configured on the workload cluster, in preparation for the workload cluster to take over responsibilities for running the components needed to manage itself:

Installing cluster-api providers on workload cluster

With CAPI running on the workload cluster, CAPI objects for the workload cluster are moved from the bootstrap cluster to the workload cluster’s CAPI service (done internally with the clusterctl command):

Moving cluster management from bootstrap to workload cluster

At this point, the cluster creation process will add Kubernetes CRDs and other addons that are specific to EKS Anywhere. That configuration is applied directly to the cluster:

Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing GitOps Toolkit on workload cluster

If you did not specify GitOps support, starting the flux service is skipped:

GitOps field not specified, bootstrap flux skipped

The cluster configuration is saved:

Writing cluster config file

With the cluster up, and the CAPI service running on the new cluster, the bootstrap cluster is no longer needed and is deleted:

Delete EKS Anywhere bootstrap cluster

At this point, cluster creation is complete. You can now use your target cluster as either:

  • A standalone cluster (to run workloads) or
  • A management cluster (to optionally create one or more workload clusters)

Creating workload clusters (optional)

As described in Create separate workload clusters , you can use the cluster you just created as a management cluster to create and manage one or more workload clusters on the same vSphere provider as follows:

  • Use eksctl to generate a cluster config file for the new workload cluster.
  • Modify the cluster config with a new cluster name and different vSphere resources.
  • Use eksctl to create the new workload cluster from the new cluster config file and credentials from the initial management cluster.

4.6.2 - Requirements for EKS Anywhere on VMware vSphere

VMware vSphere provider requirements for EKS Anywhere

To run EKS Anywhere, you will need:

Prepare Administrative machine

Set up an Administrative machine as described in Install EKS Anywhere .

Prepare a VMware vSphere environment

To prepare a VMware vSphere environment to run EKS Anywhere, you need the following:

  • A vSphere 7 or 8 environment running vCenter.

  • Capacity to deploy 6-10 VMs.

  • DHCP service running in vSphere environment in the primary VM network for your workload cluster.

  • One network in vSphere to use for the cluster. EKS Anywhere clusters need access to vCenter through the network to enable self-managing and storage capabilities.

  • An OVA imported into vSphere and converted into a template for the workload VMs

  • It’s critical that you set up your vSphere user credentials properly.

  • One IP address routable from cluster but excluded from DHCP offering. This IP address is to be used as the Control Plane Endpoint IP.

    Below are some suggestions to ensure that this IP address is never handed out by your DHCP server.

    You may need to contact your network engineer.

    • Pick an IP address reachable from the cluster subnet that is excluded from the DHCP range, OR
    • Alter the DHCP range to leave out one or more IP addresses at the top and/or the bottom of the range, OR
    • Create an IP reservation for this IP on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a non-existent MAC address.

Each VM will require:

  • 2 vCPUs
  • 8GB RAM
  • 25GB Disk

The administrative machine and the target workload environment will need network access (TCP/443) to:

  • vCenter endpoint (must be accessible to EKS Anywhere clusters)
  • public.ecr.aws
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries, manifests and OVAs)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
  • api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region)
  • d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container images)
  • api.github.com (only if GitOps is enabled)

vSphere information needed before creating the cluster

You need to get the following information before creating the cluster:

  • Static IP Addresses: You will need one IP address for the management cluster control plane endpoint, and a separate IP address for the control plane of each workload cluster you add.

    Let’s say you are going to have the management cluster and two workload clusters. For those, you would need three IP addresses, one for each cluster. All of those addresses will be configured the same way in the configuration file you will generate for each cluster.

    A static IP address is used as the control plane endpoint of each EKS Anywhere cluster. Choose IP addresses in your network range that do not conflict with other VMs and make sure they are excluded from your DHCP offering.

    An IP address will be the value of the property controlPlaneConfiguration.endpoint.host in the config file of the management cluster. A separate IP address must be assigned for each workload cluster.


  • vSphere Datacenter Name: The vSphere datacenter to deploy the EKS Anywhere cluster on.


  • VM Network Name: The VM network to deploy your EKS Anywhere cluster on.


  • vCenter Server Domain Name: The vCenter server fully qualified domain name or IP address. If the server IP is used, the thumbprint must be set or insecure must be set to true.


  • thumbprint (required if insecure=false): The SHA1 thumbprint of the vCenter server certificate which is only required if you have a self-signed certificate for your vSphere endpoint.

    There are several ways to obtain your vCenter thumbprint. If you have govc installed, you can run the following command in the Administrative machine terminal, and take a note of the output:

    govc about.cert -thumbprint -k
    
  • template: The VM template to use for your EKS Anywhere cluster. This template was created when you imported the OVA file into vSphere.


  • datastore: The vSphere datastore to deploy your EKS Anywhere cluster on.


  • folder: The folder parameter in VSphereMachineConfig allows you to organize the VMs of an EKS Anywhere cluster. With this, each cluster can be organized as a folder in vSphere. You will have a separate folder for the management cluster and each cluster you are adding.


  • resourcePool: The vSphere resource pool for your VMs in the EKS Anywhere cluster. If there is a resource pool: /<datacenter>/host/<cluster-name>/Resources/<resource-pool-name> (see the govc lookup sketch after this list).

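If you have govc installed on the Administrative machine, a quick way to look up most of the values above is with govc find (a sketch; the type flags match the govc commands referenced in the Configure for vSphere section later in this document):

govc find -type n   # VM networks
govc find -type s   # datastores
govc find -type f   # folders
govc find -type p   # resource pools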

4.6.3 - Preparing vSphere for EKS Anywhere

Set up a vSphere provider to prepare it for EKS Anywhere

Certain resources must be in place with appropriate user permissions to create an EKS Anywhere cluster using the vSphere provider.

Configuring Folder Resources

Create a VM folder:

For each user that needs to create workload clusters, have the vSphere administrator create a VM folder. That folder will host:

  • A nested folder for the management cluster and another nested folder for each workload cluster.
  • Each cluster's VMs, in their own nested folder under this folder.
vm/
├── YourVMFolder/
    ├── mgmt-cluster <------ Folder with vms for management cluster
        ├── mgmt-cluster-7c2sp
        ├── mgmt-cluster-etcd-2pbhp
        ├── mgmt-cluster-md-0-5c5844bcd8xpjcln-9j7xh
    ├── workload-cluster-0 <------ Folder with vms for workload cluster 0
        ├── workload-cluster-0-8dk3j
        ├── workload-cluster-0-etcd-20ksa
        ├── workload-cluster-0-md-0-6d964979ccxbkchk-c4qjf
    ├── workload-cluster-1 <------ Folder with vms for workload cluster 1
        ├── workload-cluster-1-59cbn
        ├── workload-cluster-1-etcd-qs6wv
        ├── workload-cluster-1-md-0-756bcc99c9-9j7xh

To see how to create folders on vSphere, see the vSphere Create a Folder documentation.
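If you prefer the CLI and have govc configured, the top-level VM folder can typically be created with govc as well (a sketch using the example names above; per-cluster nested folders are created for you if they do not exist, per the folder field description later in this document):

govc folder.create /YourDatacenter/vm/YourVMFolder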

Configuring vSphere User, Group, and Roles

You need a vSphere user with the right privileges to let you create EKS Anywhere clusters on top of your vSphere cluster.

Configure via EKSA CLI

To configure a new user via CLI, you will need two things:

  • a set of vSphere admin credentials with the ability to create users and groups. If you do not have the rights to create new groups and users, you can invoke govc commands directly as outlined here.
  • a user.yaml file:
apiVersion: "eks-anywhere.amazon.com/v1"
kind: vSphereUser
spec:
  username: "eksa"                # optional, default eksa
  group: "MyExistingGroup"        # optional, default EKSAUsers
  globalRole: "MyGlobalRole"      # optional, default EKSAGlobalRole
  userRole: "MyUserRole"          # optional, default EKSAUserRole
  adminRole: "MyEKSAAdminRole"    # optional, default EKSACloudAdminRole
  datacenter: "MyDatacenter"
  vSphereDomain: "vsphere.local"  # this should be the domain used when you login, e.g. YourUsername@vsphere.local
  connection:
    server: "https://my-vsphere.internal.acme.com"
    insecure: false
  objects:
    networks:
      - !!str "/MyDatacenter/network/My Network"
    datastores:
      - !!str "/MyDatacenter/datastore/MyDatastore2"
    resourcePools:
      - !!str "/MyDatacenter/host/Cluster-03/MyResourcePool" # NOTE: see below if you do not want to use a resource pool
    folders:
      - !!str "/MyDatacenter/vm/OrgDirectory/MyVMs"
    templates:
      - !!str "/MyDatacenter/vm/Templates/MyTemplates"

NOTE: If you do not want to create a resource pool, you can instead specify the cluster directly as /MyDatacenter/host/Cluster-03 in user.yaml, where Cluster-03 is your cluster name. In your cluster spec, you will need to specify /MyDatacenter/host/Cluster-03/Resources for the resourcePool field.

Set the admin credentials as environment variables:

export EKSA_VSPHERE_USERNAME=<ADMIN_VSPHERE_USERNAME>
export EKSA_VSPHERE_PASSWORD=<ADMIN_VSPHERE_PASSWORD>

If the user does not already exist, you can create the user and all the specified group and role objects by running:

eksctl anywhere exp vsphere setup user -f user.yaml --password '<NewUserPassword>'

If the user or any of the group or role objects already exist, use the force flag instead to overwrite Group-Role-Object mappings for the group, roles, and objects specified in the user.yaml config file:

eksctl anywhere exp vsphere setup user -f user.yaml --force

Please note that there is one more manual step to configure global permissions here .

Configure via govc

If you do not have the rights to create a new user, you can still configure the necessary roles and permissions using the govc cli .

#! /bin/bash
# govc calls to configure a user with minimal permissions
set -x
set -e

EKSA_USER='<Username>@<UserDomain>'
USER_ROLE='EKSAUserRole'
GLOBAL_ROLE='EKSAGlobalRole'
ADMIN_ROLE='EKSACloudAdminRole'

FOLDER_VM='/YourDatacenter/vm/YourVMFolder'
FOLDER_TEMPLATES='/YourDatacenter/vm/Templates'

NETWORK='/YourDatacenter/network/YourNetwork'
DATASTORE='/YourDatacenter/datastore/YourDatastore'
RESOURCE_POOL='/YourDatacenter/host/Cluster-01/Resources/YourResourcePool'

govc role.create "$GLOBAL_ROLE" $(curl https://raw.githubusercontent.com/aws/eks-anywhere/main/pkg/config/static/globalPrivs.json | jq .[] | tr '\n' ' ' | tr -d '"')

govc role.create "$USER_ROLE" $(curl https://raw.githubusercontent.com/aws/eks-anywhere/main/pkg/config/static/eksUserPrivs.json | jq .[] | tr '\n' ' ' | tr -d '"')

govc role.create "$ADMIN_ROLE" $(curl https://raw.githubusercontent.com/aws/eks-anywhere/main/pkg/config/static/adminPrivs.json | jq .[] | tr '\n' ' ' | tr -d '"')

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$GLOBAL_ROLE" /

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$ADMIN_ROLE" "$FOLDER_VM"

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$ADMIN_ROLE" "$FOLDER_TEMPLATES"

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$USER_ROLE" "$NETWORK"

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$USER_ROLE" "$DATASTORE"

govc permissions.set -group=false -principal "$EKSA_USER"  -role "$USER_ROLE" "$RESOURCE_POOL"

NOTE: If you do not want to create a resource pool, you can instead specify the cluster directly, for example /YourDatacenter/host/Cluster-01 as the RESOURCE_POOL value in the script above, where Cluster-01 is your cluster name. In your cluster spec, you will need to specify /YourDatacenter/host/Cluster-01/Resources for the resourcePool field.

Please note that there is one more manual step to configure global permissions here .

Configure via UI

Add a vCenter User

Ask your vSphere administrator to add a vCenter user that will be used for the provisioning of the EKS Anywhere cluster in VMware vSphere.

  1. Log in with the vSphere Client to the vCenter Server.
  2. Specify the user name and password for a member of the vCenter Single Sign-On Administrators group.
  3. Navigate to the vCenter Single Sign-On user configuration UI.
    • From the Home menu, select Administration.
    • Under Single Sign On, click Users and Groups.
  4. If vsphere.local is not the currently selected domain, select it from the drop-down menu. You cannot add users to other domains.
  5. On the Users tab, click Add.
  6. Enter a user name and password for the new user. The maximum number of characters allowed for the user name is 300. You cannot change the user name after you create a user. The password must meet the password policy requirements for the system.
  7. Click Add.

For more details, see vSphere Add vCenter Single Sign-On Users documentation.

Create and define user roles

When you add a user for creating clusters, that user initially has no privileges to perform management operations. You must add the user to groups with the required permissions, or assign one or more roles with the required privileges to the user.

Three roles are needed to be able to create the EKS Anywhere cluster:

  1. Create a global custom role: For example, you could name this EKS Anywhere Global. Define it for the user on the vCenter domain level and its children objects. Create this role with the following privileges:

    > Content Library
    * Add library item
    * Check in a template
    * Check out a template
    * Create local library
    * Update files
    > vSphere Tagging
    * Assign or Unassign vSphere Tag
    * Assign or Unassign vSphere Tag on Object
    * Create vSphere Tag
    * Create vSphere Tag Category
    * Delete vSphere Tag
    * Delete vSphere Tag Category
    * Edit vSphere Tag
    * Edit vSphere Tag Category
    * Modify UsedBy Field For Category
    * Modify UsedBy Field For Tag
    > Sessions
    * Validate session
    
  2. Create a user custom role: The second role is also a custom role that you could call, for example, EKSAUserRole. Define this role with the following objects and children objects.

    • The resource pool level and its children objects. This is the resource pool that the EKS Anywhere VMs will be part of.
    • The storage object level and its children objects. This is the storage that will be used to store the cluster VMs.
    • The network VLAN object level and its children objects. This is the network that will host the cluster VMs.
    • The VM and Template folder level and its children objects.

    Create this role with the following privileges:

    > Content Library
    * Add library item
    * Check in a template
    * Check out a template
    * Create local library
    > Datastore
    * Allocate space
    * Browse datastore
    * Low level file operations
    > Folder
    * Create folder
    > vSphere Tagging
    * Assign or Unassign vSphere Tag
    * Assign or Unassign vSphere Tag on Object
    * Create vSphere Tag
    * Create vSphere Tag Category
    * Delete vSphere Tag
    * Delete vSphere Tag Category
    * Edit vSphere Tag
    * Edit vSphere Tag Category
    * Modify UsedBy Field For Category
    * Modify UsedBy Field For Tag
    > Network
    * Assign network
    > Resource
    * Assign virtual machine to resource pool
    > Scheduled task
    * Create tasks
    * Modify task
    * Remove task
    * Run task
    > Profile-driven storage
    * Profile-driven storage view
    > Storage views
    * View
    > vApp
    * Import
    > Virtual machine
    * Change Configuration
      - Add existing disk
      - Add new disk
      - Add or remove device
      - Advanced configuration
      - Change CPU count
      - Change Memory
      - Change Settings
      - Configure Raw device
      - Extend virtual disk
      - Modify device settings
      - Remove disk
    * Edit Inventory
      - Create from existing
      - Create new
      - Remove
    * Interaction
      - Power off
      - Power on
    * Provisioning
      - Clone template
      - Clone virtual machine
      - Create template from virtual machine
      - Customize guest
      - Deploy template
      - Mark as template
      - Read customization specifications
    * Snapshot management
      - Create snapshot
      - Remove snapshot
      - Revert to snapshot
    
  3. Create a default Administrator role: The third role is the default system role Administrator, which you assign to the user on the level of the folder (and its children objects, the VMs and OVA templates) that the vSphere administrator created for you.

    To create a role and define privileges check Create a vCenter Server Custom Role and Defined Privileges pages.

Manually set Global Permissions role in Global Permissions UI

vSphere does not currently support a public API for setting global permissions. Because of this, you will need to manually assign the Global Role you created to your user or group in the Global Permissions UI.

Make sure to select the Propagate to children box so the permissions get propagated down properly.


Deploy an OVA Template

If the user creating the cluster has permission and network access to create and tag a template, you can skip these steps because EKS Anywhere will automatically download the OVA and create the template if it can. If the user does not have the permissions or network access to create and tag the template, follow this guide. The OVA contains the operating system (Ubuntu, Bottlerocket, or RHEL) for a specific EKS Distro Kubernetes release and EKS Anywhere version. The following example uses Ubuntu as the operating system, but a similar workflow would work for Bottlerocket or RHEL.

Steps to deploy the OVA

  1. Go to the artifacts page and download or build the OVA template with the newest EKS Distro Kubernetes release to your computer.
  2. Log in to the vCenter Server.
  3. Right-click the folder you created above and select Deploy OVF Template. The Deploy OVF Template wizard opens.
  4. On the Select an OVF template page, select the Local file option, specify the location of the OVA template you downloaded to your computer, and click Next.
  5. On the Select a name and folder page, enter a unique name for the virtual machine or leave the default generated name, if you do not have other templates with the same name within your vCenter Server virtual machine folder. The default deployment location for the virtual machine is the inventory object where you started the wizard, which is the folder you created above. Click Next.
  6. On the Select a compute resource page, select the resource pool where to run the deployed VM template, and click Next.
  7. On the Review details page, verify the OVF or OVA template details and click Next.
  8. On the Select storage page, select a datastore to store the deployed OVF or OVA template and click Next.
  9. On the Select networks page, select a source network and map it to a destination network. Click Next.
  10. On the Ready to complete page, review the page and Click Finish. For details, see Deploy an OVF or OVA Template

To build your own Ubuntu OVA template check the Building your own Ubuntu OVA section.

To use the deployed OVA template to create the VMs for the EKS Anywhere cluster, you have to tag it with specific values for the os and eksdRelease keys. The value of the os key is the operating system of the deployed OVA template, which is ubuntu in our scenario. The value of the eksdRelease key holds the Kubernetes version and the EKS-D release used in the deployed OVA template. Check the Customize OVAs page for more details.
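If you prefer the CLI over the vSphere UI, the same two tags can be attached with govc (a sketch; the tag values must match your OVA, and the tags and their categories must already exist, as described in the Import OVAs section later in this document):

govc tags.attach os:ubuntu <Template Path>
govc tags.attach eksdRelease:kubernetes-1-21-eks-8 <Template Path>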

Steps to tag the deployed OVA template:

  1. Go to the artifacts page and take notes of the tags and values associated with the OVA template you deployed in the previous step.
  2. In the vSphere Client, select Menu -> Tags & Custom Attributes.
  3. Select the Tags tab and click Tags.
  4. Click New.
  5. In the Create Tag dialog box, copy the os tag name associated with your OVA that you noted earlier, which in our case is os:ubuntu, and paste it as the name for the first required tag.
  6. Specify the tag category os if it exists, or create it if it does not.
  7. Click Create.
  8. Now to add the release tag, repeat steps 2-4.
  9. In the Create Tag dialog box, copy the eksdRelease tag name associated with your OVA that you noted earlier, which in our case is eksdRelease:kubernetes-1-21-eks-8, and paste it as the name for the second required tag.
  10. Specify the tag category eksdRelease if it exists, or create it if it does not.
  11. Click Create.
  12. Navigate to the VM and Template tab.
  13. Select the folder that was created.
  14. Select deployed template and click Actions.
  15. From the drop-down menu, select Tags & Custom Attributes -> Assign Tag.
  16. Select the tags we created from the list and confirm the operation.

4.6.4 - Create vSphere cluster

Create an EKS Anywhere cluster on VMware vSphere

EKS Anywhere supports a VMware vSphere provider for EKS Anywhere deployments. This document walks you through setting up EKS Anywhere on vSphere in a way that:

  • Deploys an initial cluster on your vSphere environment. That cluster can be used as a self-managed cluster (to run workloads) or a management cluster (to create and manage other clusters)
  • Deploys zero or more workload clusters from the management cluster

If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters. Using a management cluster makes it faster to provision and delete workload clusters. Also it lets you keep vSphere credentials for a set of clusters in one place: on the management cluster. The alternative is to simply use your initial cluster to run workloads. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite Checklist

EKS Anywhere needs to:

Also, see the Ports and protocols page for information on ports that need to be accessible from control plane, worker, and Admin machines.

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or self-managed cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here . Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here .

    If you are going to use packages, set up authentication. These credentials should have limited capabilities :

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"  
    
  2. Generate an initial cluster config (named mgmt for this example):

    CLUSTER_NAME=mgmt
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider vsphere > eksa-mgmt-cluster.yaml
    
  3. Modify the initial cluster config (eksa-mgmt-cluster.yaml) as follows:

    • Refer to vsphere configuration for information on configuring this cluster config for a vSphere provider.
    • Add Optional configuration settings as needed. See Github provider to see how to identify your Git information.
    • Create at least two control plane nodes, three worker nodes, and three etcd nodes, to provide high availability and rolling upgrades.
  4. Set Credential Environment Variables

    Before you create the initial cluster, you will need to set and export these environment variables for your vSphere user name and password. Make sure you use single quotes around the values so that your shell does not interpret the values:

    export EKSA_VSPHERE_USERNAME='billy'
    export EKSA_VSPHERE_PASSWORD='t0p$ecret'
    
  5. Create cluster

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation      
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation      
    
  6. Once the cluster is created you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  7. Check the cluster nodes:

    To check that the cluster completed, list the machines to see the control plane, etcd, and worker nodes:

    kubectl get machines -A
    

    Example command output

    NAMESPACE   NAME                PROVIDERID        PHASE    VERSION
    eksa-system mgmt-b2xyz          vsphere:/xxxxx    Running  v1.24.2-eks-1-24-5
    eksa-system mgmt-etcd-r9b42     vsphere:/xxxxx    Running  
    eksa-system mgmt-md-8-6xr-rnr   vsphere:/xxxxx    Running  v1.24.2-eks-1-24-5
    ...
    

    The etcd machine doesn’t show the Kubernetes version because it doesn’t run the kubelet service.

  8. Check the initial cluster’s CRD:

    To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:

    kubectl get clusters mgmt -o yaml
    

    Example command output

    ...
    kubernetesVersion: "1.28"
    managementCluster:
      name: mgmt
    workerNodeGroupConfigurations:
    ...
    

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider vsphere > eksa-w01-cluster.yaml
    

    Refer to the initial config described earlier for the required and optional settings.

    NOTE: Ensure workload cluster object names (Cluster, vSphereDatacenterConfig, vSphereMachineConfig, etc.) are distinct from management cluster object names.

  3. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  4. Create a workload cluster in one of the following ways:

    • GitOps: See Manage separate workload clusters with GitOps

    • Terraform: See Manage separate workload clusters with Terraform

      NOTE: spec.users[0].sshAuthorizedKeys must be specified to SSH into your nodes when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not generate the keys like eksctl CLI does when the field is empty.

    • eksctl CLI: To create a workload cluster with eksctl, run:

      eksctl anywhere create cluster \
          -f eksa-w01-cluster.yaml  \
          --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig \
          # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation      
      

      As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

    • kubectl CLI: The cluster lifecycle feature lets you use kubectl, or other tools that can talk to the Kubernetes API, to create a workload cluster. To use kubectl, run:

      kubectl apply -f eksa-w01-cluster.yaml 
      

      To check the state of a cluster managed with the cluster lifecycle feature, use kubectl to show the cluster object with its status.

      The status field on the cluster object holds information about the current state of the cluster.

      kubectl get clusters w01 -o yaml
      

      The cluster is ready to use once the status of the Ready condition is marked True. See the cluster status guide for more information.

  5. To check the workload cluster, get the workload cluster credentials and run a test workload:

    • If your workload cluster was created with eksctl, change your credentials to point to the new workload cluster (for example, w01), then run the test application with:

      export CLUSTER_NAME=w01
      export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
    • If your workload cluster was created with GitOps or Terraform, the kubeconfig for your new cluster is stored as a secret on the management cluster. You can get credentials and run the test application as follows:

      kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
      export KUBECONFIG=w01.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
  6. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.

Next steps

  • See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

4.6.5 - Configure for vSphere

Full EKS Anywhere configuration reference for a VMware vSphere cluster.

This is a generic template with detailed descriptions below for reference.


apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name             # Name of the cluster (required)
spec:
   clusterNetwork:                   # Cluster network configuration (required)
      cniConfig:                     # Cluster CNI plugin - default: cilium (required)
         cilium: {}
      pods:
         cidrBlocks:                 # Internal Kubernetes subnet CIDR block for pods (required)
            - 192.168.0.0/16
      services:
         cidrBlocks:                 # Internal Kubernetes subnet CIDR block for services (required)
            - 10.96.0.0/12
   controlPlaneConfiguration:        # Specific cluster control plane config (required)
      count: 2                       # Number of control plane nodes (required)
      endpoint:                      # IP for control plane endpoint on your network (required)
         host: xxx.xxx.xxx.xxx
      machineGroupRef:               # vSphere-specific Kubernetes node config (required)
        kind: VSphereMachineConfig
        name: my-cluster-machines
      taints:                        # Taints applied to control plane nodes 
      - key: "key1"
        value: "value1"
        effect: "NoSchedule"
      labels:                        # Labels applied to control plane nodes 
        "key1": "value1"
        "key2": "value2"
   datacenterRef:                    # Kubernetes object with vSphere-specific config 
      kind: VSphereDatacenterConfig
      name: my-cluster-datacenter
   externalEtcdConfiguration:
     count: 3                        # Number of etcd members 
     machineGroupRef:                # vSphere-specific Kubernetes etcd config
        kind: VSphereMachineConfig
        name: my-cluster-machines
   kubernetesVersion: "1.25"         # Kubernetes version to use for the cluster (required)
   workerNodeGroupConfigurations:    # List of node groups you can define for workers (required) 
   - count: 2                        # Number of worker nodes 
     machineGroupRef:                # vSphere-specific Kubernetes node objects (required) 
       kind: VSphereMachineConfig
       name: my-cluster-machines
     name: md-0                      # Name of the worker nodegroup (required) 
     taints:                         # Taints to apply to worker node group nodes 
     - key: "key1"
       value: "value1"
       effect: "NoSchedule"
     labels:                         # Labels to apply to worker node group nodes 
       "key1": "value1"
       "key2": "value2"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereDatacenterConfig
metadata:
   name: my-cluster-datacenter
spec:
  datacenter: "datacenter1"          # vSphere datacenter name on which to deploy EKS Anywhere (required) 
  server: "myvsphere.local"          # FQDN or IP address of vCenter server (required) 
  network: "network1"                # Path to the VM network on which to deploy EKS Anywhere (required) 
  insecure: false                    # Set to true if vCenter does not have a valid certificate 
  thumbprint: "1E:3B:A1:4C:B2:..."   # SHA1 thumprint of vCenter server certificate (required if insecure=false)

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
   name: my-cluster-machines
spec:
  diskGiB:  25                         # Size of disk on VMs, if no snapshots
  datastore: "datastore1"              # Path to vSphere datastore to deploy EKS Anywhere on (required)
  folder: "folder1"                    # Path to VM folder for EKS Anywhere cluster VMs (required)
  numCPUs: 2                           # Number of CPUs on virtual machines
  memoryMiB: 8192                      # Size of RAM on VMs
  osFamily: "bottlerocket"             # Operating system on VMs
  resourcePool: "resourcePool1"        # vSphere resource pool for EKS Anywhere VMs (required)
  storagePolicyName: "storagePolicy1"  # Storage policy name associated with VMs
  template: "bottlerocket-kube-v1-25"  # VM template for EKS Anywhere (required for RHEL/Ubuntu-based OVAs)
  cloneMode: "fullClone"               # Clone mode to use when cloning VMs from the template
  users:                               # Add users to access VMs via SSH
  - name: "ec2-user"                   # Name of each user set to access VMs
    sshAuthorizedKeys:                 # SSH keys for user needed to access VMs
    - "ssh-rsa AAAAB3NzaC1yc2E..."
  tags:                                # List of tags to attach to cluster VMs, in URN format
  - "urn:vmomi:InventoryServiceTag:5b3e951f-4e1d-4511-95b1-5ba1ea97245c:GLOBAL"
  - "urn:vmomi:InventoryServiceTag:cfee03d0-0189-4f27-8c65-fe75086a86cd:GLOBAL"

The following additional optional configuration can also be included:

Cluster Fields

name (required)

Name of your cluster my-cluster-name in this example

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.
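As a sketch, the Cilium direct-routing options above might be combined in the cluster spec as follows (the CIDR value is only an example for a network where native routing is preconfigured):

   clusterNetwork:
      cniConfig:
         cilium:
            routingMode: "direct"
            ipv4NativeRoutingCIDR: "10.0.0.0/8"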

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing. Suggestions on how to ensure this IP does not cause issues during the cluster creation process are here

controlPlaneConfiguration.taints (optional)

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels (optional)

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes. (default: 1) It will be ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

Refer to the troubleshooting guidance for machine health check remediation not allowed and choose a count sufficient to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.
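For example, a worker node group managed by the cluster autoscaler might look like the following sketch (count is omitted here because it is ignored when the cluster autoscaler curated package is installed and autoscalingConfiguration is used):

   workerNodeGroupConfigurations:
   - name: md-0
     machineGroupRef:
       kind: VSphereMachineConfig
       name: my-cluster-machines
     autoscalingConfiguration:
       minCount: 1
       maxCount: 5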

workerNodeGroupConfigurations[*].taints (optional)

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must NOT have NoSchedule or NoExecute taints applied to it.

workerNodeGroupConfigurations[*].labels (optional)

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values : 1.28, 1.27, 1.26, 1.25, 1.24

Must be less than or equal to the cluster kubernetesVersion defined at the root level of the cluster spec. The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane’s Kubernetes version. Removing workerNodeGroupConfiguration.kubernetesVersion will trigger an upgrade of the node group to the kubernetesVersion defined at the root level of the cluster spec.
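As a sketch, a cluster whose control plane runs Kubernetes 1.28 could pin one worker node group to 1.27:

   kubernetesVersion: "1.28"
   workerNodeGroupConfigurations:
   - name: md-0
     count: 2
     kubernetesVersion: "1.27"
     machineGroupRef:
       kind: VSphereMachineConfig
       name: my-cluster-machines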

externalEtcdConfiguration.count (optional)

Number of etcd members

externalEtcdConfiguration.machineGroupRef (optional)

Refers to the Kubernetes object with vsphere specific configuration for your etcd members. See VSphereMachineConfig Fields below.

datacenterRef (required)

Refers to the Kubernetes object with vsphere environment specific configuration. See VSphereDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values : 1.28, 1.27, 1.26, 1.25, 1.24

VSphereDatacenterConfig Fields

datacenter (required)

The name of the vSphere datacenter to deploy the EKS Anywhere cluster on. For example SDDC-Datacenter.

network (required)

The path to the VM network to deploy your EKS Anywhere cluster on. For example, /<DATACENTER>/network/<NETWORK_NAME>. Use govc find -type n to see a list of networks.

server (required)

The vCenter server fully qualified domain name or IP address. If the server IP is used, the thumbprint must be set or insecure must be set to true.

insecure (optional)

Set insecure to true if the vCenter server does not have a valid certificate. (Default: false)

thumbprint (required if insecure=false)

The SHA1 thumbprint of the vCenter server certificate which is only required if you have a self signed certificate.

There are several ways to obtain your vCenter thumbprint. The easiest way is if you have govc installed, you can run:

govc about.cert -thumbprint -k

Another way is from the vCenter web UI, go to Administration/Certificate Management and click view details of the machine certificate. The format of this thumbprint does not exactly match the format required though and you will need to add : to separate each hexadecimal value.

Another way to get the thumbprint is use this command with your servers certificate in a file named ca.crt:

openssl x509 -sha1 -fingerprint -in ca.crt -noout

If you specify the wrong thumbprint, an error message will be printed with the expected thumbprint. If no valid certificate is being used, insecure must be set to true.

VSphereMachineConfig Fields

memoryMiB (optional)

Size of RAM on virtual machines (Default: 8192)

numCPUs (optional)

Number of CPUs on virtual machines (Default: 2)

osFamily (optional)

Operating System on virtual machines. Permitted values: bottlerocket, ubuntu, redhat (Default: bottlerocket)

diskGiB (optional)

Size of disk on virtual machines if snapshots aren’t included (Default: 25)

users (optional)

The users you want to configure to access your virtual machines. Only one is permitted at this time

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh.

The default is ec2-user if osFamily=bottlerocket and capv if osFamily=ubuntu.

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder by default.

template (optional)

The VM template to use for your EKS Anywhere cluster. This template was created when you imported the OVA file into vSphere . This is a required field if you are using Ubuntu-based or RHEL-based OVAs. The template must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, template must include 1.24, 1_24, 1-24 or 124.

cloneMode (optional)

cloneMode defines the clone mode to use when creating the cluster VMs from the template. Allowed values are:

  • fullClone: With full clone, the cloned VM is a separate, independent copy of the template. This makes provisioning the VMs a bit slower, but provides better customization and performance.
  • linkedClone: With linked clone, the cloned VM shares the parent template’s virtual disk. This makes provisioning the VMs faster while also saving disk space. Linked clone does not allow customizing the disk size. The template must meet the following properties to use linkedClone:
    • The template needs to have a snapshot
    • The template’s disk size must match the VSphereMachineConfig’s diskGiB

If this field is not specified, EKS Anywhere tries to determine the clone mode based on the following criteria:

  • It uses linkedClone if the template has snapshots and the template diskSize matches the machineConfig DiskGiB.
  • Otherwise, it uses full clone.

datastore (required)

The path to the vSphere datastore to deploy your EKS Anywhere cluster on, for example /<DATACENTER>/datastore/<DATASTORE_NAME>. Use govc find -type s to get a list of datastores.

folder (required)

The path to a VM folder for your EKS Anywhere cluster VMs. This allows you to organize your VMs. If the folder does not exist, it will be created for you. If the folder is blank, the VMs will go in the root folder. For example /<DATACENTER>/vm/<FOLDER_NAME>/.... Use govc find -type f to get a list of existing folders.

resourcePool (required)

The vSphere Resource pools for your VMs in the EKS Anywhere cluster. Examples of resource pool values include:

  • If there is no resource pool: /<datacenter>/host/<cluster-name>/Resources
  • If there is a resource pool: /<datacenter>/host/<cluster-name>/Resources/<resource-pool-name>
  • The wild card option */Resources also often works.

Use govc find -type p to get a list of available resource pools.

storagePolicyName (optional)

The storage policy name associated with your VMs. Generally this can be left blank. Use govc storage.policy.ls to get a list of available storage policies.

tags (optional)

Optional list of tags to attach to your cluster VMs in the URN format.

Example:

  tags:
  - urn:vmomi:InventoryServiceTag:8e0ce079-0675-47d6-8665-16ada4e6dabd:GLOBAL

hostOSConfig (optional)

Optional host OS configurations for the EKS Anywhere Kubernetes nodes. More information in the Host OS Configuration section.

Optional VSphere Credentials

Use the following environment variables to configure the Cloud Provider with different credentials.

EKSA_VSPHERE_CP_USERNAME

Username for Cloud Provider (Default: $EKSA_VSPHERE_USERNAME).

EKSA_VSPHERE_CP_PASSWORD

Password for Cloud Provider (Default: $EKSA_VSPHERE_PASSWORD).
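For example, to give the Cloud Provider its own vSphere account instead of reusing the EKS Anywhere credentials (the values below are placeholders):

export EKSA_VSPHERE_CP_USERNAME='cloud-provider-user@vsphere.local'
export EKSA_VSPHERE_CP_PASSWORD='MyCloudProviderPassword'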

4.6.6 - Customize vSphere

Customizing EKS Anywhere on vSphere

4.6.6.1 - Import OVAs

Importing EKS Anywhere OVAs to vSphere

If you want to specify an OVA template, you will need to import OVA files into vSphere before you can use them in your EKS Anywhere cluster. This guide was written using VMware Cloud on AWS, but the VMware OVA import guide can be found here.

EKS Anywhere supports the following operating system families

  • Bottlerocket (default)
  • Ubuntu
  • RHEL

A list of OVAs for this release can be found on the artifacts page.

Using vCenter Web User Interface

  1. Right click on your Datacenter and select Deploy OVF Template.

  2. Select an OVF template using a URL or by selecting a local OVF file, and click Next. If you are not able to select an OVF template using a URL, download the file and use the Local file option.

    Note: If you are using Bottlerocket OVAs, please select the Local file option.

  3. Select a folder where you want to deploy your OVF package (most of our OVF templates are under the SDDC-Datacenter directory) and click Next. You cannot have an OVF template with the same name in one directory. For workload VM templates, leave the Kubernetes version in the template name for reference. A workload VM template will support at least one prior Kubernetes major version.

  4. Select any compute resource (from cluster-1, 10.2.34.5, etc.) to run the deployed VM and click Next.

  5. Review the details and click Next.

  6. Accept the agreement and click Next.

  7. Select the appropriate storage (e.g. “WorkloadDatastore“) and click Next.

  8. Select destination network (e.g. “sddc-cgw-network-1”) and click Next.

  9. Finish.

  10. Snapshot the VM. Right click on the imported VM and select Snapshots -> Take Snapshot… (It is highly recommended that you snapshot the VM. This will reduce the time it takes to provision machines and cluster creation will be faster. If you prefer not to take a snapshot, skip to step 13.)

  11. Name your snapshot (e.g. “root”) and click Create.

  12. Snapshots for the imported VM should now show up under the Snapshots tab for the VM.

  13. Right click on the imported VM and select Template -> Convert to Template.

Steps to deploy a template using GOVC (CLI)

To deploy a template using govc, you must first ensure that you have GOVC installed . You need to set and export three environment variables to run govc: GOVC_USERNAME, GOVC_PASSWORD, and GOVC_URL.
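For example (the server and credential values below are placeholders):

export GOVC_URL='https://my-vsphere.internal.acme.com'
export GOVC_USERNAME='YourUsername@vsphere.local'
export GOVC_PASSWORD='YourPassword'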

  1. Import the template to a content library in vCenter using URL or selecting a local OVA file

    Using URL:

    govc library.import -k -pull <library name> <URL for the OVA file>
    

    Using a file from the local machine:

    govc library.import <library name> <path to OVA file on local machine>
    
  2. Deploy the template

    govc library.deploy -pool <resource pool> -folder <folder location to deploy template> /<library name>/<template name> <name of new VM>
    

    2a. If using a Bottlerocket template for a Kubernetes version newer than 1.21, resize disk 1 to 22G

    govc vm.disk.change -vm <template name> -disk.label "Hard disk 1" -size 22G
    

    2b. If using a Bottlerocket template for Kubernetes version 1.21, resize disk 2 to 20G

    govc vm.disk.change -vm <template name> -disk.label "Hard disk 2" -size 20G
    
  3. Take a snapshot of the VM (It is highly recommended that you snapshot the VM. This reduces the time it takes to provision machines and makes cluster creation faster. If you prefer not to take a snapshot, skip this step.)

    govc snapshot.create -vm ubuntu-2004-kube-v1.25.6 root
    
  4. Mark the new VM as a template

    govc vm.markastemplate <name of new VM>
    

Important Additional Steps to Tag the OVA

Using vCenter UI

Tag to indicate OS family

  1. Select the template that was newly created in the steps above and navigate to Summary -> Tags.
  2. Click Assign -> Add Tag to create a new tag and attach it.
  3. Name the tag os:ubuntu or os:bottlerocket.

Tag to indicate eksd release

  1. Select the template that was newly created in the steps above and navigate to Summary -> Tags.
  2. Click Assign -> Add Tag to create a new tag and attach it.
  3. Name the tag eksdRelease:{eksd release for the selected ova}, for example eksdRelease:kubernetes-1-25-eks-5 for the 1.25 ova. You can find the rest of the eksd releases in the previous section. If it’s the first time you add an eksdRelease tag, you will need to create the category first. Click on “Create New Category” and name it eksdRelease.

Using govc

Tag to indicate OS family

  1. Create tag category
govc tags.category.create -t VirtualMachine os
  2. Create tags os:ubuntu and os:bottlerocket
govc tags.create -c os os:bottlerocket
govc tags.create -c os os:ubuntu
  3. Attach newly created tag to the template
govc tags.attach os:bottlerocket <Template Path>
govc tags.attach os:ubuntu <Template Path>
  4. Verify tag is attached to the template
govc tags.ls <Template Path>

Tag to indicate eksd release

  1. Create tag category
govc tags.category.create -t VirtualMachine eksdRelease
  2. Create the proper eksd release tag, depending on your template. You can find the eksd releases in the previous section. For example eksdRelease:kubernetes-1-25-eks-5 for the 1.25 template.
govc tags.create -c eksdRelease eksdRelease:kubernetes-1-25-eks-5
  3. Attach newly created tag to the template
govc tags.attach eksdRelease:kubernetes-1-25-eks-5 <Template Path>
  4. Verify tag is attached to the template
govc tags.ls <Template Path>

After you are done you can use the template for your workload cluster.
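
For reference, here is a minimal sketch of how an imported template is typically referenced from a VSphereMachineConfig in the cluster spec. The folder path and template name are placeholders, and other required machine config fields (datastore, resourcePool, and so on) are omitted:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  osFamily: ubuntu
  # Path to the template you imported and tagged above (placeholder path)
  template: "/SDDC-Datacenter/vm/Templates/ubuntu-2004-kube-v1.25.6"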

4.6.6.2 - Custom Ubuntu OVAs

Customizing Imported Ubuntu OVAs

There may be a need to make specific configuration changes on the imported OVA template before using it to create or update EKS-A clusters.

Set up SSH Access for Imported OVA

An SSH user and key need to be configured in order to allow SSH login to the VM template.

Clone template to VM

Create an environment variable to hold the name of the modified VM/template

export VM=<vm-name>

Clone the imported OVA template to create VM

govc vm.clone -on=false -vm=<full-path-to-imported-template> -folder=<full-path-to-folder-that-will-contain-the-VM> -ds=<datastore> $VM

Configure VM with cloud-init and the VMX GuestInfo datasource

Create a metadata.yaml file

instance-id: cloud-vm
local-hostname: cloud-vm
network:
  version: 2
  ethernets:
    nics:
      match:
        name: ens*
      dhcp4: yes

Create a userdata.yaml file

#cloud-config

users:
  - default
  - name: <username>
    primary_group: <username>
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, wheel
    ssh_import_id: None
    lock_passwd: true
    ssh_authorized_keys:
    - <user's ssh public key>

Export environment variable containing the cloud-init metadata and userdata

export METADATA=$(gzip -c9 <metadata.yaml | { base64 -w0 2>/dev/null || base64; }) \
       USERDATA=$(gzip -c9 <userdata.yaml | { base64 -w0 2>/dev/null || base64; })

Assign metadata and userdata to VM’s guestinfo

govc vm.change -vm "${VM}" \
  -e guestinfo.metadata="${METADATA}" \
  -e guestinfo.metadata.encoding="gzip+base64" \
  -e guestinfo.userdata="${USERDATA}" \
  -e guestinfo.userdata.encoding="gzip+base64"

Power the VM on

govc vm.power -on "$VM"

Customize the VM

Once the VM is powered on and fetches an IP address, ssh into the VM using your private key corresponding to the public key specified in userdata.yaml

ssh -i <private-key-file> username@<VM-IP>

At this point, you can make the desired configuration changes on the VM. The following sections describe some of the things you may want to do:

Add a Certificate Authority

Copy your CA certificate under /usr/local/share/ca-certificates and run sudo update-ca-certificates which will place the certificate under the /etc/ssl/certs directory.
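
For example, assuming your CA certificate is in a local file named my-ca.crt (a placeholder name):

sudo cp my-ca.crt /usr/local/share/ca-certificates/my-ca.crt
sudo update-ca-certificates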

Add Authentication Credentials for a Private Registry

If /etc/containerd/config.toml is not present initially, the default configuration can be generated by running the containerd config default > /etc/containerd/config.toml command. To configure a credential for a specific registry, create/modify the /etc/containerd/config.toml as follows:

# explicitly use v2 config format
version = 2

# The registry host has to be a domain name or IP. Port number is also
# needed if the default HTTPS or HTTP port is not used.
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry1-host:port".auth]
  username = ""
  password = ""
  auth = ""
  identitytoken = ""
 # The registry host has to be a domain name or IP. Port number is also
 # needed if the default HTTPS or HTTP port is not used.
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry2-host:port".auth]
  username = ""
  password = ""
  auth = ""
  identitytoken = ""

Restart the containerd service with the sudo systemctl restart containerd command.

Convert VM to a Template

After you have customized the VM, you need to convert it to a template.

Cleanup the machine and power off the VM

This step is needed because of a known issue in Ubuntu which results in cloned VMs getting the same DHCP IP address.

sudo su
echo -n > /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id
cloud-init clean -l --machine-id

Delete the hostname from the following file

/etc/hostname

Delete the networking config file

rm -rf /etc/netplan/50-cloud-init.yaml

Edit the cloud-init config to set preserve_hostname to false

vi /etc/cloud/cloud.cfg
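
The relevant setting in /etc/cloud/cloud.cfg should read as follows (add the line if it is not already present):

preserve_hostname: false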

Power the VM down

govc vm.power -off "$VM"

Take a snapshot of the VM

It is recommended to take a snapshot of the VM as it reduces the provisioning time for the machines and makes cluster creation faster.

If you do snapshot the VM, you will not be able to customize the disk size of your cluster VMs. If you prefer not to take a snapshot, skip this step.

govc snapshot.create -vm "$VM" root

Convert VM to template

govc vm.markastemplate $VM

Tag the template appropriately as described here

Use this customized template to create/upgrade EKS Anywhere clusters

4.6.7 -

  • vCenter endpoint (must be accessible to EKS Anywhere clusters)
  • public.ecr.aws
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries, manifests and OVAs)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
  • api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region)
  • d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container images)
  • api.github.com (only if GitOps is enabled)

4.7 - Create Bare Metal cluster

Create an EKS Anywhere cluster on Bare Metal

4.7.1 - Overview

Overview of EKS Anywhere cluster creation on bare metal

Creating a Bare Metal cluster

The following diagram illustrates what happens when you create an EKS Anywhere cluster on bare metal. You can run EKS Anywhere on bare metal as a single node cluster with the Kubernetes control plane and workloads co-located on a single server, as a multi-node cluster with the Kubernetes control plane and workloads co-located on the same servers, and as a multi-node cluster with the Kubernetes control plane and worker nodes on different, dedicated servers.

Start creating a Bare Metal cluster

Start creating EKS Anywhere Bare Metal cluster

1. Generate a config file for Bare Metal

Provide the provider (--provider tinkerbell) and the cluster name to the eksctl anywhere generate clusterconfig command and direct the output into a cluster config .yaml file, as shown in the example below.
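
For example, using mgmt as the cluster name:

export CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider tinkerbell > eksa-mgmt-cluster.yaml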

2. Modify the config file and hardware CSV file

Modify the generated cluster config file to suit your situation. Details about this config file are contained on the Bare Metal Config page. Create a hardware configuration file (hardware.csv) as described in Prepare hardware inventory .

3. Launch the cluster creation

Run the eksctl anywhere create cluster command, providing the cluster config and hardware CSV files, as in the example below. To see details on the cluster creation process, increase verbosity (-v=9 provides maximum verbosity).
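
For example:

eksctl anywhere create cluster --hardware-csv hardware.csv -f eksa-mgmt-cluster.yaml -v=9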

4. Create bootstrap cluster and provision hardware

The cluster creation process starts by creating a temporary Kubernetes bootstrap cluster on the Administrative machine. Containerized components of the Tinkerbell provisioner run either as pods on the bootstrap cluster (Hegel, Rufio, and Tink) or directly as containers on Docker (Boots). Those Tinkerbell components drive the provisioning of the operating systems and Kubernetes components on each of the physical computers.

With the information gathered from the cluster specification and the hardware CSV file, three types of custom resources are created (you can list them with the commands shown after this list):

  • Hardware custom resources: store hardware information for each machine
  • Template custom resources: store the tasks and actions
  • Workflow custom resources: put together the complete hardware and template information for each machine. There are different workflows for control plane and worker nodes.
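
For example, you can list these custom resources on the bootstrap cluster with commands like the following (a sketch that assumes your kubeconfig points at the bootstrap cluster; the hardware and template resource group names are assumed to follow the same tinkerbell.org API group as the workflows resource used later on this page):

kubectl get hardware.tinkerbell.org -n eksa-system
kubectl get templates.tinkerbell.org -n eksa-system
kubectl get workflows.tinkerbell.org -n eksa-system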

As the bootstrap cluster comes up and Tinkerbell components are started, you should see messages like the following:

$ eksctl anywhere create cluster --hardware-csv hardware.csv -f eksa-mgmt-cluster.yaml
Performing setup and validations
Tinkerbell Provider setup is valid
Validate certificate for registry mirror
Create preflight validations pass
Creating new bootstrap cluster
Provider specific pre-capi-install-setup on bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific post-setup
Creating new workload cluster

At this point, Tinkerbell will try to boot up the machines in the target cluster.

Continuing cluster creation

Tinkerbell takes over the activities of provisioning the Bare Metal machines that will become the new target cluster. See Overview of Tinkerbell in EKS Anywhere for examples of commands you can run to watch over this process.

Continue creating EKS Anywhere Bare Metal cluster

1. Tinkerbell network boots and configures nodes

  • Rufio uses BMC information to set the power state for the first control plane node it wants to provision.
  • When the node boots from its NIC, it talks to the Boots DHCP server, which fetches the kernel and initramfs (HookOS) needed to network boot the machine.
  • With HookOS running on the node, the operating system identified by IMG_URL in the cluster specification is copied to the identified DEST_DISK on the machine.
  • The Hegel component provides data stores that contain information used by services such as cloud-init to configure each system.
  • Next, the workflow is run on the first control plane node, followed by network booting and running the workflow for each subsequent control plane node.
  • Once the control plane is up, worker nodes are network booted and workflows are run to deploy each node.

2. Tinkerbell components move to the target cluster

Once all the defined nodes are added to the cluster, the Tinkerbell components and associated data are moved to run as pods on worker nodes in the new workload cluster.

Deleting Tinkerbell from Admin machine

All Tinkerbell-related pods and containers are then deleted from the Admin machine. Further management of Tinkerbell and related information can be done from the new cluster, using tools such as kubectl.

Delete Tinkerbell pods and container

Using Tinkerbell on EKS Anywhere

The sections below step through how Tinkerbell is integrated with EKS Anywhere to deploy a Bare Metal cluster. While based on features described in Tinkerbell Documentation , EKS Anywhere has modified and added to Tinkerbell components such that the entire Tinkerbell stack is now Kubernetes-friendly and can run on a Kubernetes cluster.

Create bare metal CSV file

The information that Tinkerbell uses to provision machines for the target EKS Anywhere cluster needs to be gathered in a CSV file with the following format:

hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
eksa-cp01,10.10.44.1,root,PrZ8W93i,CC:48:3A:00:00:01,10.10.50.2,255.255.254.0,10.10.50.1,8.8.8.8,type=cp,/dev/sda
...

Each physical, bare metal machine is represented by a comma-separated list of information on a single line. It includes information needed to identify each machine (the NIC’s MAC address), network boot the machine, point to the disk to install on, and then configure and start the installed system. See Preparing hardware inventory for details on the content and format of that file.

Modify the cluster specification file

Before you create a cluster using the Bare Metal configuration file, you can make Tinkerbell-related changes to that file. In particular, TinkerbellDatacenterConfig fields, TinkerbellMachineConfig fields, and Tinkerbell Actions can be added or modified.

Tinkerbell actions vary based on the operating system you choose for your EKS Anywhere cluster. Actions are stored internally and not shown in the generated cluster specification file, so you must add those sections yourself to change from the defaults (see Ubuntu TinkerbellTemplateConfig example and Bottlerocket TinkerbellTemplateConfig example for details).

In most cases, you don’t need to touch the default actions. However, you might want to modify an action (for example to change kexec to a reboot action if the hardware requires it) or add an action to further configure the installed system. Examples in Advanced Bare Metal cluster configuration show a few actions you might want to add.

Once you have made all your modifications, you can go ahead and create the cluster. The next section describes how Tinkerbell works during cluster creation to provision your Bare Metal machines and prepare them to join the EKS Anywhere cluster.

4.7.2 - Tinkerbell Concepts

Overview of Tinkerbell and network booting for EKS Anywhere on Bare Metal

EKS Anywhere uses Tinkerbell to provision machines for a Bare Metal cluster. Understanding what Tinkerbell is and how it works with EKS Anywhere can help you take advantage of advanced provisioning features or overcome provisioning problems you encounter.

As someone deploying an EKS Anywhere cluster on Bare Metal, you have several opportunities to interact with Tinkerbell:

  • Create a hardware CSV file: You are required to create a hardware CSV file that contains an entry for every physical machine you want to add at cluster creation time.
  • Create an EKS Anywhere cluster: By modifying the Bare Metal configuration file used to create a cluster, you can change some Tinkerbell settings or add actions to define how the operating system on each machine is configured.
  • Monitor provisioning: You can follow along with the Tinkerbell Overview in this page to monitor the progress of your hardware provisioning, as Tinkerbell finds machines and attempts to network boot, configure, and restart them.

When you run the command to create an EKS Anywhere Bare Metal cluster, a set of Tinkerbell components start up on the Admin machine. One of these components runs in a container on Docker (Boots), while other components run as either controllers or services in pods on the Kubernetes kind cluster that is started up on the Admin machine. Tinkerbell components include Boots, Hegel, Rufio, and Tink.

Tinkerbell Boots service

The Boots service runs in a single container to handle the DHCP service and network booting activities. In particular, Boots hands out IP addresses, serves iPXE binaries via HTTP and TFTP, delivers an iPXE script to the provisioned machines, and runs a syslog server.

Boots is different from the other Tinkerbell services because the DHCP service it runs must listen directly to layer 2 traffic. (The kind cluster running on the Admin machine doesn’t have the ability to have pods listening on layer 2 networks, which is why Boots is run directly on Docker instead, with host networking enabled.)

Because Boots is running as a container in Docker, you can see the output in the logs for the Boots container by running:

docker logs boots

From the logs output, you will see iPXE try to network boot each machine. If the process doesn’t get all the information it wants from the DHCP server, it will time out. You can see iPXE loading variables, loading a kernel and initramfs (via DHCP), then booting into that kernel and initramfs: in other words, you will see everything that happens with iPXE before it switches over to the kernel and initramfs. The kernel, initramfs, and all images retrieved later are obtained remotely over HTTP and HTTPS.

Tinkerbell Hegel, Rufio, and Tink components

After Boots comes up on Docker, a small Kubernetes kind cluster starts up on the Admin machine. Other Tinkerbell components run as pods on that kind cluster. Those components include:

  • Hegel: Manages Tinkerbell’s metadata service. The Hegel service gets its metadata from the hardware specification stored in Kubernetes in the form of custom resources. The format that it serves is similar to the EC2 metadata format.
  • Rufio: Handles talking to BMCs (which manage things like starting and stopping systems with IPMI or Redfish). The Rufio Kubernetes controller sets things such as power state and persistent boot order. BMC authentication is managed with Kubernetes secrets.
  • Tink: The Tink service consists of three components: Tink server, Tink controller, and Tink worker. The Tink controller manages hardware data, templates you want to execute, and the workflows that each target specific hardware you are provisioning. The Tink worker is a small binary that runs inside of HookOS and talks to the Tink server. The worker sends the Tink server its MAC address and asks the server for workflows to run. The Tink worker will then go through each action, one-by-one, and try to execute it.

To see those services and controllers running on the kind bootstrap cluster, type:

kubectl get pods -n eksa-system
NAME                                      READY STATUS    RESTARTS AGE
hegel-sbchp                               1/1   Running   0        3d
rufio-controller-manager-5dcc568c79-9kllz 1/1   Running   0        3d
tink-controller-manager-54dc786db6-tm2c5  1/1   Running   0        3d
tink-server-5c494445bc-986sl              1/1   Running   0        3d

Provisioning hardware with Tinkerbell

After you start up the cluster create process, the following is the general workflow that Tinkerbell performs to begin provisioning the bare metal machines and prepare them to become part of the EKS Anywhere target cluster. You can set up kubectl on the Admin machine to access the bootstrap cluster and follow along:

export KUBECONFIG=${PWD}/${CLUSTER_NAME}/generated/${CLUSTER_NAME}.kind.kubeconfig

Power up the nodes

Tinkerbell starts by finding a node from the hardware list (based on MAC address) and contacting it to identify a baseboard management job (job.bmc) that runs a set of baseboard management tasks (task.bmc). To see that information, type:

kubectl get job.bmc -A
NAMESPACE    NAME                                           AGE
eksa-system  mycluster-md-0-1656099863422-vxvh2-provision   12m
kubectl get tasks.bmc -A
NAMESPACE    NAME                                                AGE
eksa-system  mycluster-md-0-1656099863422-vxh2-provision-task-0  55s
eksa-system  mycluster-md-0-1656099863422-vxh2-provision-task-1  51s
eksa-system  mycluster-md-0-1656099863422-vxh2-provision-task-2  47s

The following shows snippets from the tasks.bmc output that represent the three tasks: Power Off, enable network boot, and Power On.

kubectl describe tasks.bmc -n eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-0
...
  Task:
    Power Action:  Off
Status:
  Completion Time:   2022-06-27T20:32:59Z
  Conditions:
    Status:    True
    Type:      Completed 
kubectl describe tasks.bmc -n eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-1
...
  Task:
    One Time Boot Device Action:
      Device:
        pxe
      Efi Boot:  true
Status:
  Completion Time:   2022-06-27T20:33:04Z
  Conditions:
    Status:    True
    Type:      Completed   
kubectl describe tasks.bmc -n eksa-system mycluster-md-0-1656099863422-vxh2-provision-task-2
  Task:
    Power Action:  on
Status:
  Completion Time:   2022-06-27T20:33:10Z
  Conditions:
    Status:    True
    Type:      Completed   

Rufio converts the baseboard management jobs into task objects, then goes ahead and executes each task. To see Rufio logs, type:

kubectl logs -n eksa-system rufio-controller-manager-5dcc568c79-9kllz | less

Network booting the nodes

Next the Boots service netboots the machine and begins streaming the HookOS (vmlinuz and initramfs) to the machine. HookOS runs in memory and provides the installation environment. To watch the Boots log messages as each node powers up, type:

docker logs boots 

You can search the output for vmlinuz and initramfs to watch as the HookOS is downloaded and booted from memory on each machine.

Running workflows

Once the HookOS is up, Tinkerbell begins running the tasks and actions contained in the workflows. This is coordinated between the Tink worker, running in memory within the HookOS on the machine, and the Tink server on the kind cluster. To see the workflows being run, type the following:

kubectl get workflows.tinkerbell.org -n eksa-system
NAME                                TEMPLATE                            STATE
mycluster-md-0-1656099863422-vxh2   mycluster-md-0-1656099863422-vxh2   STATE_RUNNING

This shows the workflow for the first machine that is being provisioned. Add -o yaml to see details of that workflow template:

kubectl get workflows.tinkerbell.org -n eksa-system -o yaml
...
status:
  state: STATE_RUNNING
  tasks:
  - actions:
    - environment:
        COMPRESSED: "true"
        DEST_DISK: /dev/sda
        IMG_URL: https://anywhere-assets.eks.amazonaws.com/releases/bundles/11/artifacts/raw/1-22/bottlerocket-v1.22.10-eks-d-1-22-8-eks-a-11-amd64.img.gz
      image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
      name: stream-image
      seconds: 35
      startedAt: "2022-06-27T20:37:39Z"
      status: STATE_SUCCESS
...

You can see that the first action in the workflow is to stream (stream-image) the operating system to the destination disk (DEST_DISK) on the machine. In this example, the Bottlerocket operating system that will be copied to disk (/dev/sda) is being served from the location specified by IMG_URL. The action was successful (STATE_SUCCESS) and it took 35 seconds.

Each action and its status is shown in this output for the whole workflow. To see details of the default actions for each supported operating system, see the Ubuntu TinkerbellTemplateConfig example and Bottlerocket TinkerbellTemplateConfig example.

In general, the actions include:

  • Streaming the operating system image to disk on each machine.
  • Configuring the network interfaces on each machine.
  • Setting up the cloud-init or similar service to add users and otherwise configure the system.
  • Identifying the data source to add to the system.
  • Setting the kernel to pivot to the installed system (using kexec) or having the system reboot to bring up the installed system from disk.

If all goes well, you will see all actions set to STATE_SUCCESS, except for the kexec-image action. That should show as STATE_RUNNING for as long as the machine is running.

You can review the CAPT logs to see provisioning activity. For example, at the start of a new provisioning event, you would see something like the following:

kubectl logs -n capt-system capt-controller-manager-9f8b95b-frbq | less
..."Created BMCJob to get hardware ready for provisioning"...

You can follow this output to see the machine as it goes through the provisioning process.

After the node is initialized, completes all the Tinkerbell actions, and is booted into the installed operating system (Ubuntu or Bottlerocket), the new system starts cloud-init to do further configuration. At this point, the system will reach out to the Tinkerbell Hegel service to get its metadata.

If something goes wrong, viewing Hegel files can help you understand why a stuck system that has booted into Ubuntu or Bottlerocket has not joined the cluster yet. To see the Hegel logs, get the internal IP address for one of the new nodes. Then check for the names of Hegel logs and display the contents of one of those logs, searching for the IP address of the node:

kubectl get nodes -o wide
NAME        STATUS   ROLES                 AGE    VERSION               INTERNAL-IP    ...
eksa-da04   Ready    control-plane,master  9m5s   v1.22.10-eks-7dc61e8  10.80.30.23
kubectl get pods -n eksa-system | grep hegel
hegel-n7ngs
kubectl logs -n eksa-system hegel-n7ngs
..."Retrieved IP peer IP..."userIP":"10.80.30.23...

If the log shows you are getting requests from the node, the problem is not a cloud-init issue.

After the first machine successfully completes the workflow, each other machine repeats the same process until the initial set of machines is all up and running.

Tinkerbell moves to target cluster

Once the initial set of machines is up and the EKS Anywhere cluster is running, all the Tinkerbell services and components (including Boots) are moved to the new target cluster and run as pods on that cluster. Those services are deleted on the kind cluster on the Admin machine.

Reviewing the status

At this point, you can change your kubectl credentials to point at the new target cluster to get information about Tinkerbell services on the new cluster. For example:

export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig

First check that the Tinkerbell pods are all running by listing pods from the eksa-system namespace:

kubectl get pods -n eksa-system
NAME                                        READY   STATUS    RESTARTS   AGE
boots-5dc66b5d4-klhmj                       1/1     Running   0          3d
hegel-sbchp                                 1/1     Running   0          3d
rufio-controller-manager-5dcc568c79-9kllz   1/1     Running   0          3d
tink-controller-manager-54dc786db6-tm2c5    1/1     Running   0          3d
tink-server-5c494445bc-986sl                1/1     Running   0          3d

Next, check the list of Tinkerbell machines. If all of the machines were provisioned successfully, you should see true under the READY column for each one.

kubectl get tinkerbellmachine -A
NAMESPACE    NAME                                                   CLUSTER    STATE  READY  INSTANCEID                          MACHINE
eksa-system  mycluster-control-plane-template-1656099863422-pqq2q   mycluster         true   tinkerbell://eksa-system/eksa-da04  mycluster-72p72

You can also check the machines themselves. Watch the PHASE change from Provisioning to Provisioned to Running. The Running phase indicates that the machine is now running as a node on the new cluster:

kubectl get machines -n eksa-system
NAME              CLUSTER    NODENAME    PROVIDERID                         PHASE    AGE  VERSION
mycluster-72p72   mycluster  eksa-da04   tinkerbell://eksa-system/eksa-da04 Running  7m25s   v1.22.10-eks-1-22-8

Once you have confirmed that all your machines are successfully running as nodes on the target cluster, there is not much for Tinkerbell to do. It stays around to continue running the DHCP service and to be available to add more machines to the cluster.

4.7.3 - Requirements for EKS Anywhere on Bare Metal

Bare Metal provider requirements for EKS Anywhere

To run EKS Anywhere on Bare Metal, you need to meet the hardware and networking requirements described below.

Administrative machine

Set up an Administrative machine as described in Install EKS Anywhere.

Compute server requirements

The minimum number of physical machines needed to run EKS Anywhere on bare metal is 1. To configure EKS Anywhere to run on a single server, set controlPlaneConfiguration.count to 1, and omit workerNodeGroupConfigurations from your cluster configuration.
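
A minimal sketch of the relevant portion of the cluster spec for a single-server cluster follows (field names come from the Bare Metal configuration reference; the names shown are placeholders and other required fields are omitted):

spec:
  controlPlaneConfiguration:
    count: 1                     # a single machine runs the control plane and workloads
    endpoint:
      host: "<Control Plane Endpoint IP>"
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: single-node-cp
  # workerNodeGroupConfigurations is omitted, so pods are scheduled on the control plane node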

The recommended number of physical machines for production is at least:

  • Control plane physical machines: 3
  • Worker physical machines: 2

The compute hardware you need for your Bare Metal cluster must meet the following capacity requirements:

  • vCPU: 2
  • Memory: 8GB RAM
  • Storage: 25GB

Operating system requirements

If you intend to use a non-Bottlerocket OS, you must build it using image-builder. See the OS Management Artifacts page for help building the OS.

Upgrade requirements

If you are running a standalone cluster with only one control plane node, you will need at least one additional, temporary machine for each control plane node grouping. For clusters with multiple control plane nodes, you can perform a rolling upgrade with or without an extra temporary machine. For worker node upgrades, you can perform a rolling upgrade with or without an extra temporary machine.

When upgrading without an extra machine, keep in mind that your control plane and your workload must be able to tolerate node unavailability. When upgrading with extra machine(s), you will need additional temporary machine(s) for each control plane and worker node grouping. Refer to Upgrade Bare Metal Cluster and Advanced configuration for upgrade rollout strategy .

NOTE: For single-node clusters that require an additional temporary machine for upgrading, if you don’t want to set up the extra hardware, you may recreate the cluster for upgrading and handle data recovery manually.

Network requirements

Each machine should include the following features:

  • Network Interface Cards: at least one NIC is required. It must be capable of network booting.

  • BMC integration (recommended): an IPMI or Redfish implementation (such as Dell iDRAC, HP iLO, another Redfish-compatible interface, or a legacy interface) on the computer’s motherboard or on a separate expansion card. This feature is used to allow remote management of the machine, such as turning the machine on and off.

NOTE: BMC integration is not required for an EKS Anywhere cluster. However, without BMC integration, upgrades are not supported and you will have to physically turn machines off and on when appropriate.

Here are other network requirements:

  • All EKS Anywhere machines, including the Admin, control plane and worker machines, must be on the same layer 2 network and have network connectivity to the BMC (IPMI, Redfish, and so on).

  • You must be able to run DHCP on the control plane/worker machine network.

NOTE: If you have another DHCP service running on the network, you need to prevent it from interfering with the EKS Anywhere DHCP service. You can do that by configuring the other DHCP service to explicitly block all MAC addresses and exclude all IP addresses that you plan to use with your EKS Anywhere clusters.

  • If you have not followed the steps for airgapped environments , then the administrative machine and the target workload environment need network access (TCP/443) to:

    • public.ecr.aws

    • anywhere-assets.eks.amazonaws.com: to download the EKS Anywhere binaries, manifests and OVAs

    • distro.eks.amazonaws.com: to download EKS Distro binaries and manifests

    • d2glxqk2uabbnd.cloudfront.net: for EKS Anywhere and EKS Distro ECR container images

  • Two IP addresses routable from the cluster, but excluded from DHCP offering. One IP address is to be used as the Control Plane Endpoint IP. The other is for the Tinkerbell IP address on the target cluster. Below are some suggestions to ensure that these IP addresses are never handed out by your DHCP server. You may need to contact your network engineer to manage these addresses.

    • Pick IP addresses reachable from the cluster subnet that are excluded from the DHCP range or

    • Create an IP reservation for these addresses on your DHCP server. This is usually accomplished by adding a dummy mapping of the IP address to a non-existent MAC address.

NOTE: When you set up your cluster configuration YAML file, the endpoint and Tinkerbell addresses are set in the controlPlaneConfiguration.endpoint.host and tinkerbellIP fields, respectively.
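
For example (the addresses shown are placeholders; pick addresses excluded from your DHCP range):

# In the Cluster object
controlPlaneConfiguration:
  endpoint:
    host: "10.10.50.100"        # Control Plane Endpoint IP

# In the TinkerbellDatacenterConfig object
spec:
  tinkerbellIP: "10.10.50.101"  # Tinkerbell IP on the target cluster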

Validated hardware

Through extensive testing in a variety of on-premises environments, we have validated that Amazon EKS Anywhere on bare metal works without modification on most modern hardware that meets the above requirements. Compatibility is determined by the host operating system selected when Building Node Images . Installation may require you to Customize HookOS for EKS Anywhere on Bare Metal to add drivers, or to modify configuration specific to your environment. Bottlerocket support for bare metal was deprecated with the EKS Anywhere v0.19 release.

4.7.4 - Preparing Bare Metal for EKS Anywhere

Set up a Bare Metal cluster to prepare it for EKS Anywhere

After gathering hardware described in Bare Metal Requirements , you need to prepare the hardware and create a CSV file describing that hardware.

Prepare hardware

To prepare your computer hardware for EKS Anywhere, you need to connect your computer hardware and do some configuration. Once the hardware is in place, you need to:

  • Obtain IP and MAC addresses for your machines' NICs.
  • Obtain IP addresses for your machines' BMC interfaces.
  • Obtain the gateway address for your network to reach the Internet.
  • Obtain the IP address for your DNS servers.
  • Make sure the following settings are in place:
    • UEFI is enabled on all target cluster machines, unless you are provisioning RHEL systems. Enable legacy BIOS on any RHEL machines.
    • Netboot (PXE or HTTP) boot is enabled for the NIC on each machine for which you provided the MAC address. This is the interface on which the operating system will be provisioned.
    • IPMI over LAN and/or Redfish is enabled on all BMC interfaces.
  • Go to the BMC settings for each machine and set the IP address (bmc_ip), username (bmc_username), and password (bmc_password) to use later in the CSV file.

Prepare hardware inventory

Create a CSV file to provide information about all physical machines that you are ready to add to your target Bare Metal cluster. This file will be used:

  • When you generate the hardware file to be included in the cluster creation process described in the Create Bare Metal production cluster Getting Started guide.
  • To provide information that is passed to each machine from the Tinkerbell DHCP server when the machine is initially network booted.

NOTE: While using kubectl, GitOps, or Terraform for workload cluster creation, please make sure to refer to this section.

The following is an example of an EKS Anywhere Bare Metal hardware CSV file:

hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
eksa-cp01,10.10.44.1,root,PrZ8W93i,CC:48:3A:00:00:01,10.10.50.2,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=cp,/dev/sda
eksa-cp02,10.10.44.2,root,Me9xQf93,CC:48:3A:00:00:02,10.10.50.3,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=cp,/dev/sda
eksa-cp03,10.10.44.3,root,Z8x2M6hl,CC:48:3A:00:00:03,10.10.50.4,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=cp,/dev/sda
eksa-wk01,10.10.44.4,root,B398xRTp,CC:48:3A:00:00:04,10.10.50.5,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=worker,/dev/sda
eksa-wk02,10.10.44.5,root,w7EenR94,CC:48:3A:00:00:05,10.10.50.6,255.255.254.0,10.10.50.1,8.8.8.8|8.8.4.4,type=worker,/dev/sda

The CSV file is a comma-separated list of values in a plain text file, holding information about the physical machines in the datacenter that are intended to be a part of the cluster creation process. Each line represents a physical machine (not a virtual machine).

The following sections describe each value.

hostname

The hostname assigned to the machine.

bmc_ip (optional)

The IP address assigned to the BMC interface on the machine.

bmc_username (optional)

The username assigned to the BMC interface on the machine.

bmc_password (optional)

The password associated with the bmc_username assigned to the BMC interface on the machine.

mac

The MAC address of the network interface card (NIC) that provides access to the host computer.

ip_address

The IP address providing access to the host computer.

netmask

The netmask associated with the ip_address value. In the example above, a /23 subnet mask is used, allowing you to use up to 510 IP addresses in that range.

gateway

IP address of the interface that provides access (the gateway) to the Internet.

nameservers

The IP address of the server that provides DNS service to the cluster. Multiple nameserver addresses can be separated with a | character, as shown in the example above.

labels

The optional labels field can consist of a key/value pair to use in conjunction with the hardwareSelector field when you set up your Bare Metal configuration. The key/value pair is connected with an equal (=) sign.

For example, a TinkerbellMachineConfig with a hardwareSelector containing type: cp will match entries in the CSV containing type=cp in its label definition.
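
For example, here is a sketch of a machine config that selects the control plane machines from the sample CSV above (only the relevant fields are shown):

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  hardwareSelector:
    type: cp        # matches CSV rows whose labels column contains type=cp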

disk

The device name of the disk on which the operating system will be installed. For example, it could be /dev/sda for the first SCSI disk or /dev/nvme0n1 for the first NVME storage device.

4.7.5 - Create Bare Metal cluster

Create a cluster on Bare Metal

EKS Anywhere supports a Bare Metal provider for EKS Anywhere deployments. EKS Anywhere allows you to provision and manage Kubernetes clusters based on Amazon EKS software on your own infrastructure.

This document walks you through setting up EKS Anywhere on Bare Metal as a standalone, self-managed cluster or combined set of management/workload clusters. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite checklist

EKS Anywhere needs:

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or self-managed cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here . Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here.

    If you are going to use packages, set up authentication. These credentials should have limited capabilities:

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"  
    
  2. Set an environment variable for your cluster name:

    export CLUSTER_NAME=mgmt
    
  3. Generate a cluster config file for your Bare Metal provider (using tinkerbell as the provider type).

    eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider tinkerbell > eksa-mgmt-cluster.yaml
    
  4. Modify the cluster config (eksa-mgmt-cluster.yaml) by referring to the Bare Metal configuration reference documentation.

  5. Create the cluster, using the hardware.csv file you made in Bare Metal preparation .

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       --hardware-csv hardware.csv \
       -f eksa-mgmt-cluster.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       --hardware-csv hardware.csv \
       -f $CLUSTER_NAME.yaml \
       --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    
  6. Once the cluster is created you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  7. Check the cluster nodes:

    To check that the cluster completed, list the machines to see the control plane and worker nodes:

    kubectl get machines -A
    

    Example command output:

    NAMESPACE     NAME                        CLUSTER   NODENAME        PROVIDERID                              PHASE     AGE   VERSION
    eksa-system   mgmt-47zj8                  mgmt      eksa-node01     tinkerbell://eksa-system/eksa-node01    Running   1h    v1.23.7-eks-1-23-4
    eksa-system   mgmt-md-0-7f79df46f-wlp7w   mgmt      eksa-node02     tinkerbell://eksa-system/eksa-node02    Running   1h    v1.23.7-eks-1-23-4
    ...
    
  8. Check the cluster:

    You can now use the cluster as you would any Kubernetes cluster. To try it out, run the test application with:

    export CLUSTER_NAME=mgmt
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in Deploy test workload.

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider tinkerbell > eksa-w01-cluster.yaml
    

    Refer to the initial config described earlier for the required and optional settings. Ensure workload cluster object names (Cluster, TinkerbellDatacenterConfig, TinkerbellMachineConfig, etc.) are distinct from management cluster object names. Keep the tinkerbellIP of the workload cluster the same as the tinkerbellIP of the management cluster.

  3. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  4. Create a workload cluster

    To create a new workload cluster from your management cluster run this command, identifying:

    • The workload cluster YAML file
    • The initial cluster’s credentials (this causes the workload cluster to be managed from the management cluster)

    Create a workload cluster in one of the following ways:

    • eksctl CLI: To create a workload cluster with eksctl, run:

      eksctl anywhere create cluster \
          -f eksa-w01-cluster.yaml  \
          --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig \
          # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
          # --hardware-csv <hardware.csv> \ # uncomment to add more hardware
          # --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \ # uncomment for airgapped install
      

      As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

    • kubectl CLI: The cluster lifecycle feature lets you use kubectl to talk to the Kubernetes API to create a workload cluster. To use kubectl, run:

      kubectl apply -f eksa-w01-cluster.yaml --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
      

      To check the state of a cluster managed with the cluster lifecycle feature, use kubectl to show the cluster object with its status.

      The status field on the cluster object field holds information about the current state of the cluster.

      kubectl get clusters w01 -o yaml
      

      The cluster has been fully created once the status of the Ready condition is marked True. See the cluster status guide for more information.

    • GitOps: See Manage separate workload clusters with GitOps

    • Terraform: See Manage separate workload clusters with Terraform

      NOTE: For kubectl, GitOps and Terraform:

      • The baremetal controller does not support scaling upgrades and Kubernetes version upgrades in the same request.
      • While creating a new workload cluster if you need to add additional machines for the target workload cluster, run:
        eksctl anywhere generate hardware -z updated-hardware.csv > updated-hardware.yaml
        kubectl apply -f updated-hardware.yaml
        
      • For creating multiple workload clusters, it is essential that the hardware labels and selectors defined for a given workload cluster are unique to that workload cluster. For instance, for an EKS Anywhere cluster named eksa-workload1, the hardware that is assigned for this cluster should have labels that are only going to be used for this cluster like type=eksa-workload1-cp and type=eksa-workload1-worker. Another workload cluster named eksa-workload2 can have labels like type=eksa-workload2-cp and type=eksa-workload2-worker. Please note that even though labels can be arbitrary, they need to be unique for each workload cluster. Not specifying unique cluster labels can cause cluster creations to behave in unexpected ways which may lead to unsuccessful creations and unstable clusters. See the hardware selectors section for more information
  5. Check the workload cluster:

    You can now use the workload cluster as you would any Kubernetes cluster. Change your credentials to point to the new workload cluster (for example, mgmt-w01), then run the test application with:

    export CLUSTER_NAME=mgmt-w01
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in the deploy test application section.

  6. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.

Next steps:

  • See the Cluster management section for more information on common operational tasks like deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

4.7.6 - Configure for Bare Metal

Full EKS Anywhere configuration reference for a Bare Metal cluster.

This is a generic template with detailed descriptions below for reference. The following additional optional configuration can also be included:

To generate your own cluster configuration, follow instructions from the Create Bare Metal cluster section and modify it using descriptions below. For information on how to add cluster configuration settings to this file for advanced node configuration, see Advanced Bare Metal cluster configuration .

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
    endpoint:
      host: "<Control Plane Endpoint IP>"
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-cluster-name-cp
  datacenterRef:
    kind: TinkerbellDatacenterConfig
    name: my-cluster-name
  kubernetesVersion: "1.28"
  managementCluster:
    name: my-cluster-name
  workerNodeGroupConfigurations:
  - count: 1
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-cluster-name
    name: md-0

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: my-cluster-name
spec:
  tinkerbellIP: "<Tinkerbell IP>"

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  hardwareSelector: {}
  osFamily: bottlerocket
  templateRef: {}
  users:
  - name: ec2-user
    sshAuthorizedKeys:
    - ssh-rsa AAAAB3NzaC1yc2... jwjones@833efcab1482.home.example.com

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name
spec:
  hardwareSelector: {}
  osFamily: bottlerocket
  templateRef:
    kind: TinkerbellTemplateConfig
    name: my-cluster-name
  users:
  - name: ec2-user
    sshAuthorizedKeys:
    - ssh-rsa AAAAB3NzaC1yc2... jwjones@833efcab1482.home.example.com

Cluster Fields

name (required)

Name of your cluster (my-cluster-name in this example).

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.
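
Putting the optional cilium fields together, here is a sketch of a cniConfig block with illustrative values (adjust or omit fields to match your network):

clusterNetwork:
  cniConfig:
    cilium:
      policyEnforcementMode: default
      egressMasqueradeInterfaces: eth0
      routingMode: direct
      ipv4NativeRoutingCIDR: 10.20.0.0/16
      skipUpgrade: false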

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes. This number needs to be odd to maintain ETCD quorum.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other machines.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing.

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See TinkerbellMachineConfig Fields below.

controlPlaneConfiguration.taints (optional)

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint (For k8s versions prior to 1.24, node-role.kubernetes.io/master. For k8s versions 1.24+, node-role.kubernetes.io/control-plane). The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels (optional)

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.

controlPlaneConfiguration.upgradeRolloutStrategy (optional)

Configuration parameters for upgrade strategy.

controlPlaneConfiguration.upgradeRolloutStrategy.type (optional)

Default: RollingUpdate

Type of rollout strategy. Supported values: RollingUpdate,InPlace.

NOTE: The upgrade rollout strategy type must be the same for all control plane and worker nodes.

controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate (optional)

Configuration parameters for customizing rolling upgrade behavior.

NOTE: The rolling update parameters can only be configured if upgradeRolloutStrategy.type is RollingUpdate.

controlPlaneConfiguration.upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)

Default: 1

This can not be 0 if maxUnavailable is 0.

The maximum number of machines that can be scheduled above the desired number of machines.

Example: When this is set to n, the new control plane node group can be scaled up immediately by n when the rolling upgrade starts. Total number of machines in the cluster (old + new) never exceeds (desired number of machines + n). Once scale down happens and old machines are brought down, the new control plane node group can be scaled up further ensuring that the total number of machines running at any time does not exceed the desired number of machines + n.
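
For example, here is a sketch of a control plane configuration that keeps the default rolling update strategy and allows one extra machine during upgrades:

controlPlaneConfiguration:
  count: 3
  upgradeRolloutStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1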

controlPlaneConfiguration.skipLoadBalancerDeployment (optional)

Optional field to skip deploying the control plane load balancer. Make sure your infrastructure can handle control plane load balancing when you set this field to true. In most cases, you should not set this field to true.

datacenterRef (required)

Refers to the Kubernetes object with Tinkerbell-specific configuration. See TinkerbellDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

managementCluster (required)

Identifies the name of the management cluster. If your cluster spec is for a standalone or management cluster, this value is the same as the cluster name.

workerNodeGroupConfigurations (optional)

This takes in a list of node groups that you can define for your workers.

You can omit workerNodeGroupConfigurations when creating Bare Metal clusters. If you omit workerNodeGroupConfigurations, control plane nodes will not be tainted and all pods will run on the control plane nodes. This mechanism can be used to deploy Bare Metal clusters on a single server. You can also run multi-node Bare Metal clusters without workerNodeGroupConfigurations.

NOTE: Empty workerNodeGroupConfigurations is not supported when Kubernetes version <= 1.21.
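
As an illustration of the single-server case, a minimal Cluster fragment might omit workerNodeGroupConfigurations entirely; all names, IPs, and versions below are placeholders, and other required fields (such as clusterNetwork) are omitted for brevity:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: single-node
spec:
  controlPlaneConfiguration:
    count: 1
    endpoint:
      host: "192.168.0.30"            # placeholder control plane endpoint IP
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: single-node-cp
  datacenterRef:
    kind: TinkerbellDatacenterConfig
    name: single-node
  kubernetesVersion: "1.28"
  managementCluster:
    name: single-node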

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes (default: 1). This value is ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

See the troubleshooting topic machine health check remediation not allowed and choose a count high enough to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See TinkerbellMachineConfig Fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration (optional)

Configuration parameters for Cluster Autoscaler.

NOTE: Autoscaling configuration is not supported when using the InPlace upgrade rollout strategy.

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.
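
For example, a worker node group sketch with an autoscaling range might look like the following; the group name and counts are placeholders:

workerNodeGroupConfigurations:
- name: md-0
  machineGroupRef:
    kind: TinkerbellMachineConfig
    name: my-cluster-name
  autoscalingConfiguration:
    minCount: 1
    maxCount: 5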

workerNodeGroupConfigurations[*].taints (optional)

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.

workerNodeGroupConfigurations[*].labels (optional)

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values : 1.28, 1.27, 1.26, 1.25, 1.24

Must be less than or equal to the cluster kubernetesVersion defined at the root level of the cluster spec. The worker node kubernetesVersion must be no more than two minor Kubernetes versions lower than the cluster control plane’s Kubernetes version. Removing workerNodeGroupConfiguration.kubernetesVersion will trigger an upgrade of the node group to the kubernetesVersion defined at the root level of the cluster spec.
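
A sketch of a modular version skew, assuming a 1.28 control plane defined at the root of the spec; the group name and count are placeholders:

spec:
  kubernetesVersion: "1.28"
  workerNodeGroupConfigurations:
  - name: md-0
    count: 2
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-cluster-name
    kubernetesVersion: "1.26"   # at most two minor versions below the control plane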

workerNodeGroupConfigurations[*].upgradeRolloutStrategy (optional)

Configuration parameters for upgrade strategy.

workerNodeGroupConfigurations[*].upgradeRolloutStrategy.type (optional)

Default: RollingUpdate

Type of rollout strategy. Supported values: RollingUpdate, InPlace.

NOTE: The upgrade rollout strategy type must be the same for all control plane and worker nodes.

workerNodeGroupConfigurations[*].upgradeRolloutStrategy.rollingUpdate (optional)

Configuration parameters for customizing rolling upgrade behavior.

NOTE: The rolling update parameters can only be configured if upgradeRolloutStrategy.type is RollingUpdate.

workerNodeGroupConfigurations[*].upgradeRolloutStrategy.rollingUpdate.maxSurge (optional)

Default: 1

This cannot be 0 if maxUnavailable is 0.

The maximum number of machines that can be scheduled above the desired number of machines.

Example: When this is set to n, the new worker node group can be scaled up immediately by n when the rolling upgrade starts. The total number of machines in the cluster (old + new) never exceeds the desired number of machines + n. Once old machines are scaled down, the new worker node group can be scaled up further, ensuring that the total number of machines running at any time never exceeds the desired number of machines + n.

workerNodeGroupConfigurations[*].upgradeRolloutStrategy.rollingUpdate.maxUnavailable (optional)

Default: 0

This cannot be 0 if maxSurge is 0.

The maximum number of machines that can be unavailable during the upgrade.

Example: When this is set to n, the old worker node group can be scaled down by n machines immediately when the rolling upgrade starts. Once new machines are ready, the old worker node group can be scaled down further, followed by scaling up the new worker node group, ensuring that the total number of machines unavailable at any time during the upgrade never exceeds n.
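
Putting the two parameters together, a worker node group upgrade strategy sketch might look like the following; the counts are illustrative:

workerNodeGroupConfigurations:
- name: md-0
  count: 3
  machineGroupRef:
    kind: TinkerbellMachineConfig
    name: my-cluster-name
  upgradeRolloutStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0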

TinkerbellDatacenterConfig Fields

tinkerbellIP (required)

Required field to identify the IP address of the Tinkerbell service. This IP address must be a unique IP in the network range that does not conflict with other IPs. Once the Tinkerbell services move from the Admin machine to run on the target cluster, this IP address makes it possible for the stack to be used for future provisioning needs. When separate management and workload clusters are supported in Bare Metal, the IP address becomes a necessity.

osImageURL (optional)

Optional field to replace the default Bottlerocket operating system. EKS Anywhere can only auto-import Bottlerocket. To use Ubuntu or RHEL, see building baremetal node images . This field is also useful if you want to provide a customized operating system image or simply host the standard image locally. To upgrade a node or group of nodes to a new operating system version (for example, RHEL 8.7 to RHEL 8.8), modify this field to point to the new operating system image URL and run the upgrade cluster command . The osImageURL must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in the case of a modular upgrade). For example, if the Kubernetes version is 1.24, the osImageURL name should include 1.24, 1_24, 1-24 or 124.

hookImagesURLPath (optional)

Optional field to replace the HookOS image. This field is useful if you want to provide a customized HookOS image or simply host the standard image locally. See Artifacts for details.

Example TinkerbellDatacenterConfig.spec

spec:
  tinkerbellIP: "192.168.0.10"                                          # Available, routable IP
  osImageURL: "http://my-web-server/ubuntu-v1.23.7-eks-a-12-amd64.gz"   # Full URL to the OS Image hosted locally
  hookImagesURLPath: "http://my-web-server/hook"                        # Path to the hook images. This path must contain vmlinuz-x86_64 and initramfs-x86_64

This is the folder structure for my-web-server:

my-web-server
├── hook
│   ├── initramfs-x86_64
│   └── vmlinuz-x86_64
└── ubuntu-v1.23.7-eks-a-12-amd64.gz

skipLoadBalancerDeployment (optional)

Optional field to skip deploying the default load balancer for the Tinkerbell stack.

EKS Anywhere for Bare Metal uses the kube-vip load balancer by default to expose the Tinkerbell stack externally. You can disable this feature by setting this field to true.

NOTE: If you skip load balancer deployment, you will have to ensure that the Tinkerbell stack is available at tinkerbellIP once the cluster creation is finished. One way to achieve this is by using the MetalLB package.
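
For example, a TinkerbellDatacenterConfig fragment that disables the default load balancer might look like this; the IP is a placeholder:

spec:
  tinkerbellIP: "192.168.0.10"          # placeholder Tinkerbell IP
  skipLoadBalancerDeployment: true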

TinkerbellMachineConfig Fields

In the example, there are TinkerbellMachineConfig sections for control plane (my-cluster-name-cp) and worker (my-cluster-name) machine groups. The following fields identify information needed to configure the nodes in each of those groups.

NOTE: Currently, you can only have one machine group for all machines in the control plane, although you can have multiple machine groups for the workers.

hardwareSelector (optional)

Use fields under hardwareSelector to add key/value pair labels to match particular machines that you identified in the CSV file where you defined the machines in your cluster. Choose any label name you like. For example, if you had added the label node=cp-machine to the machines listed in your CSV file that you want to be control plane nodes, the following hardwareSelector field would cause those machines to be added to the control plane:

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  hardwareSelector:
    node: "cp-machine"

osFamily (required)

Operating system on the machine. Permitted values: bottlerocket, ubuntu, redhat (Default: bottlerocket).

osImageURL (optional)

Optional field to replace the default Bottlerocket operating system. EKS Anywhere can only auto-import Bottlerocket. To use Ubuntu or RHEL, see building baremetal node images . This field is also useful if you want to provide a customized operating system image or simply host the standard image locally. To upgrade a node or group of nodes to a new operating system version (for example, RHEL 8.7 to RHEL 8.8), modify this field to point to the new operating system image URL and run the upgrade cluster command .

NOTE: If specified for a single TinkerbellMachineConfig, osImageURL has to be specified for all the TinkerbellMachineConfigs. osImageURL field cannot be specified both in the TinkerbellDatacenterConfig and TinkerbellMachineConfig objects.
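
For illustration, a TinkerbellMachineConfig fragment pointing at a locally hosted Ubuntu image might look like the following; the image URL reuses the placeholder web server from the earlier example:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name
spec:
  osFamily: ubuntu
  osImageURL: "http://my-web-server/ubuntu-v1.23.7-eks-a-12-amd64.gz"   # placeholder URL to a locally hosted image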

templateRef (optional)

Identifies the template that defines the actions that will be applied to the TinkerbellMachineConfig. See TinkerbellTemplateConfig fields below. EKS Anywhere will generate default templates based on osFamily during the create command. You can override this default template by providing your own template here.

users (optional)

The name of the user you want to configure to access your machines through SSH.

The default is ec2-user. Currently, only one user is supported.

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your machines through SSH (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster machines so you can SSH into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<machine-IP>

If you do not specify a value, a key is generated and placed in your $(pwd)/<cluster-name> folder by default.
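
A sketch of the users block; the public key shown is a truncated placeholder:

users:
- name: ec2-user
  sshAuthorizedKeys:
  - "ssh-rsa AAAAB3NzaC1yc2E... user@admin-machine"   # placeholder public key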

hostOSConfig (optional)

Optional host OS configurations for the EKS Anywhere Kubernetes nodes. More information in the Host OS Configuration section.

Advanced Bare Metal cluster configuration

When you generate a Bare Metal cluster configuration, the TinkerbellTemplateConfig is kept internally and not shown in the generated configuration file. TinkerbellTemplateConfig settings define the actions done to install each node, such as get installation media, configure networking, add users, and otherwise configure the node.

Advanced users can override the default values set for TinkerbellTemplateConfig. They can also add their own Tinkerbell actions to make personalized modifications to EKS Anywhere nodes.

The following shows two TinkerbellTemplateConfig examples that you can add to your cluster configuration file to override the values that EKS Anywhere sets: one for Ubuntu and one for Bottlerocket. Most actions used differ for different operating systems.

NOTE: For the stream-image action, DEST_DISK points to the device representing the entire hard disk (for example, /dev/sda). For UEFI-enabled images, such as Ubuntu, write actions use DEST_DISK to point to the second partition (for example, /dev/sda2), with the first being the EFI partition. For the Bottlerocket image, which has 12 partitions, DEST_DISK is partition 12 (for example, /dev/sda12). Device names will be different for different disk types.

Ubuntu TinkerbellTemplateConfig example

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
  name: my-cluster-name
spec:
  template:
    global_timeout: 6000
    id: ""
    name: my-cluster-name
    tasks:
    - actions:
      - environment:
          COMPRESSED: "true"
          DEST_DISK: /dev/sda
          IMG_URL: https://my-file-server/ubuntu-v1.23.7-eks-a-12-amd64.gz
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: stream-image
        timeout: 360
      - environment:
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/netplan/config.yaml
          STATIC_NETPLAN: true
          DIRMODE: "0755"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-netplan
        timeout: 90
      - environment:
          CONTENTS: |
            datasource:
              Ec2:
                metadata_urls: [<admin-machine-ip>, <tinkerbell-ip-from-cluster-config>]
                strict_id: false
            manage_etc_hosts: localhost
            warnings:
              dsid_missing_source: off            
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: add-tink-cloud-init-config
        timeout: 90
      - environment:
          CONTENTS: |
            network:
              config: disabled            
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: disable-cloud-init-network-capabilities
        timeout: 90
      - environment:
          CONTENTS: |
                        datasource: Ec2
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/ds-identify.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: add-tink-cloud-init-ds-config
        timeout: 90
      - environment:
          BLOCK_DEVICE: /dev/sda2
          FS_TYPE: ext4
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/kexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: kexec-image
        pid: host
        timeout: 90
      name: my-cluster-name
      volumes:
      - /dev:/dev
      - /dev/console:/dev/console
      - /lib/firmware:/lib/firmware:ro
      worker: '{{.device_1}}'
    version: "0.1"

Bottlerocket TinkerbellTemplateConfig example

Pay special attention to the BOOTCONFIG_CONTENTS environment section below if you wish to set up console redirection for the kernel and systemd. If you are only using a direct attached monitor as your primary display device, no additional configuration is needed here. However, if you need all boot output to be shown via a server’s serial console for example, extra configuration should be provided inside BOOTCONFIG_CONTENTS.

An empty kernel {} key is provided below in the example; inside this key is where you will specify your console devices. You may specify multiple comma-delimited console devices in quotes to a console key, as follows: console = "tty0", "ttyS0,115200n8". The order of the devices is significant; systemd will output to the last device specified. The console key belongs inside the kernel key like so:

kernel {
    console = "tty0", "ttyS0,115200n8"
}

The above example will send all kernel output to both consoles, and systemd output to ttyS0. Additional information about serial console setup can be found in the Linux kernel documentation .

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
  name: my-cluster-name
spec:
  template:
    global_timeout: 6000
    id: ""
    name: my-cluster-name
    tasks:
    - actions:
      - environment:
          COMPRESSED: "true"
          DEST_DISK: /dev/sda
          IMG_URL: https://anywhere-assets.eks.amazonaws.com/releases/bundles/11/artifacts/raw/1-22/bottlerocket-v1.22.10-eks-d-1-22-8-eks-a-11-amd64.img.gz
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: stream-image
        timeout: 360
      - environment:
          # An example console declaration that will send all kernel output to both consoles, and systemd output to ttyS0.
          # kernel {
          #     console = "tty0", "ttyS0,115200n8"
          # }
          BOOTCONFIG_CONTENTS: |
                        kernel {}
          DEST_DISK: /dev/sda12
          DEST_PATH: /bootconfig.data
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-bootconfig
        timeout: 90
      - environment:
          CONTENTS: |
            # Version is required, it will change as we support
            # additional settings
            version = 1
            # "eno1" is the interface name
            # Users may turn on dhcp4 and dhcp6 via boolean
            [eno1]
            dhcp4 = true
            # Define this interface as the "primary" interface
            # for the system.  This IP is what kubelet will use
            # as the node IP.  If none of the interfaces has
            # "primary" set, we choose the first interface in
            # the file
            primary = true            
          DEST_DISK: /dev/sda12
          DEST_PATH: /net.toml
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-netconfig
        timeout: 90
      - environment:
          HEGEL_URLS: http://<hegel-ip>:50061
          DEST_DISK: /dev/sda12
          DEST_PATH: /user-data.toml
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-user-data
        timeout: 90
      - name: "reboot"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/reboot:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        timeout: 90
        volumes:
          - /worker:/worker
      name: my-cluster-name
      volumes:
      - /dev:/dev
      - /dev/console:/dev/console
      - /lib/firmware:/lib/firmware:ro
      worker: '{{.device_1}}'
    version: "0.1"

TinkerbellTemplateConfig Fields

The values in the TinkerbellTemplateConfig fields are created from the contents of the CSV file used to generate a configuration. The template contains actions that are performed on a Bare Metal machine when it first boots up to be provisioned. For advanced users, you can add these fields to your cluster configuration file if you have special needs to do so.

While there are fields that apply to all provisioned operating systems, actions are specific to each operating system. Examples below describe actions for Ubuntu and Bottlerocket operating systems.

template.global_timeout

Sets the timeout value for completing the configuration. Set to 6000 (100 minutes) by default.

template.id

Not set by default.

template.tasks

Within the TinkerbellTemplateConfig template under tasks is a set of actions. The following descriptions cover the actions shown in the example templates for Ubuntu and Bottlerocket:

template.tasks.actions.name.stream-image (Ubuntu and Bottlerocket)

The stream-image action streams the selected image to the machine you are provisioning. It identifies:

  • environment.COMPRESSED: When set to true, Tinkerbell expects IMG_URL to be a compressed image, which Tinkerbell will uncompress when it writes the contents to disk.
  • environment.DEST_DISK: The hard disk on which the operating system is deployed. The default is the first SCSI disk (/dev/sda), but can be changed for other disk types.
  • environment.IMG_URL: The operating system tarball (ubuntu or other) to stream to the machine you are configuring.
  • image: Container image needed to perform the steps needed by this action.
  • timeout: Sets the amount of time (in seconds) that Tinkerbell has to stream the image, uncompress it, and write it to disk before timing out. Consider increasing the default of 600 if this action is timing out.

Ubuntu-specific actions

template.tasks.actions.name.write-netplan (Ubuntu)

The write-netplan action writes Ubuntu network configuration information to the machine (see Netplan for details). It identifies:

  • environment.CONTENTS.network.version: Identifies the network version.
  • environment.CONTENTS.network.renderer: Defines the service to manage networking. By default, the networkd systemd service is used.
  • environment.CONTENTS.network.ethernets: Network interface to external network (eno1, by default) and whether or not to use dhcp4 (true, by default).
  • environment.DEST_DISK: Destination block storage device partition where the operating system is copied. By default, /dev/sda2 is used (sda1 is the EFI partition).
  • environment.DEST_PATH: File where the networking configuration is written (/etc/netplan/config.yaml, by default).
  • environment.DIRMODE: Linux directory permissions bits to use when creating directories (0755, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
  • environment.MODE: The Linux permission bits to set on file (0644, by default).
  • environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.add-tink-cloud-init-config (Ubuntu)

The add-tink-cloud-init-config action configures cloud-init features to further configure the operating system. See cloud-init Documentation for details. It identifies:

  • environment.CONTENTS.datasource: Identifies Ec2 (Ec2.metadata_urls) as the data source and sets Ec2.strict_id: false to prevent cloud-init from producing warnings about this datasource.
  • environment.CONTENTS.system_info: Creates the tink user and gives it administrative group privileges (wheel, adm) and passwordless sudo privileges, and sets the default shell (/bin/bash).
  • environment.CONTENTS.manage_etc_hosts: Updates the system’s /etc/hosts file with the hostname. Set to localhost by default.
  • environment.CONTENTS.warnings: Sets dsid_missing_source to off.
  • environment.DEST_DISK: Destination block storage device partition where the operating system is located (/dev/sda2, by default).
  • environment.DEST_PATH: Location of the cloud-init configuration file on disk (/etc/cloud/cloud.cfg.d/10_tinkerbell.cfg, by default)
  • environment.DIRMODE: Linux directory permissions bits to use when creating directories (0700, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
  • environment.MODE: The Linux permission bits to set on file (0600, by default).
  • environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.add-tink-cloud-init-ds-config (Ubuntu)

The add-tink-cloud-init-ds-config action configures cloud-init data store features. This identifies the location of your metadata source once the machine is up and running. It identifies:

  • environment.CONTENTS.datasource: Sets the datasource. Uses Ec2, by default.
  • environment.DEST_DISK: Destination block storage device partition where the operating system is located (/dev/sda2, by default).
  • environment.DEST_PATH: Location of the data store identity configuration file on disk (/etc/cloud/ds-identify.cfg, by default)
  • environment.DIRMODE: Linux directory permissions bits to use when creating directories (0700, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
  • environment.MODE: The Linux permission bits to set on file (0600, by default).
  • environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.kexec-image (Ubuntu)

The kexec-image action performs provisioning activities on the machine, then allows kexec to pivot the kernel to use the system installed on disk. This action identifies:

  • environment.BLOCK_DEVICE: Disk partition on which the operating system is installed (/dev/sda2, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • image: Container image used to perform the steps needed by this action.
  • pid: Process ID. Set to host, by default.
  • timeout: Time needed to complete the action, in seconds.
  • volumes: Identifies mount points that need to be remounted to point to locations in the installed system.

There are known issues related to drivers with some hardware that may make it necessary to replace the kexec-image action with a full reboot. If you require a full reboot, you can change the kexec-image setting as follows:

actions:
- name: "reboot"
  image: public.ecr.aws/l0g8r8j6/tinkerbell/hub/reboot-action:latest
  timeout: 90
  volumes:
  - /worker:/worker

Bottlerocket-specific actions

template.tasks.actions.write-bootconfig (Bottlerocket)

The write-bootconfig action identifies the location on the machine to put content needed to boot the system from disk.

  • environment.BOOTCONFIG_CONTENTS.kernel: Add kernel parameters that are passed to the kernel when the system boots.
  • environment.DEST_DISK: Identifies the block storage device that holds the boot partition.
  • environment.DEST_PATH: Identifies the file holding boot configuration data (/bootconfig.data in this example).
  • environment.DIRMODE: The Linux permissions assigned to the boot directory.
  • environment.FS_TYPE: The filesystem type associated with the boot partition.
  • environment.GID: The group ID associated with files and directories created on the boot partition.
  • environment.MODE: The Linux permissions assigned to files in the boot partition.
  • environment.UID: The user ID associated with files and directories created on the boot partition. UID 0 is the root user.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.write-netconfig (Bottlerocket)

The write-netconfig action configures networking for the system.

  • environment.CONTENTS: Add network values, including: version = 1 (version number), [eno1] (external network interface), dhcp4 = true (turns on dhcp4), and primary = true (identifies this interface as the primary interface used by kubelet).
  • environment.DEST_DISK: Identifies the block storage device that holds the network configuration information.
  • environment.DEST_PATH: Identifies the file holding network configuration data (/net.toml in this example).
  • environment.DIRMODE: The Linux permissions assigned to the directory holding network configuration settings.
  • environment.FS_TYPE: The filesystem type associated with the partition holding network configuration settings.
  • environment.GID: The group ID associated with files and directories created on the partition. GID 0 is the root group.
  • environment.MODE: The Linux permissions assigned to files in the partition.
  • environment.UID: The user ID associated with files and directories created on the partition. UID 0 is the root user.
  • image: Container image used to perform the steps needed by this action.

template.tasks.actions.write-user-data (Bottlerocket)

The write-user-data action configures the Tinkerbell Hegel service, which provides the metadata store for Tinkerbell.

  • environment.HEGEL_URLS: The IP address and port number of the Tinkerbell Hegel service.
  • environment.DEST_DISK: Identifies the block storage device that holds the user data.
  • environment.DEST_PATH: Identifies the file holding the user data (/user-data.toml in this example).
  • environment.DIRMODE: The Linux permissions assigned to the directory holding network configuration settings.
  • environment.FS_TYPE: The filesystem type associated with the partition holding network configuration settings.
  • environment.GID: The group ID associated with files and directories created on the partition. GID 0 is the root group.
  • environment.MODE: The Linux permissions assigned to files in the partition.
  • environment.UID: The user ID associated with files and directories created on the partition. UID 0 is the root user.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.reboot (Bottlerocket)

The reboot action defines how the system restarts to bring up the installed system.

  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.
  • volumes: The volume (directory) to mount into the container from the installed system.

version

Matches the current version of the Tinkerbell template.

Custom Tinkerbell action examples

By creating your own custom Tinkerbell actions, you can add to or modify the installed operating system so those changes take effect when the installed system first starts (from a reboot or pivot). The following example shows how to add a .deb package (openssl) to an Ubuntu installation:

      - environment:
          BLOCK_DEVICE: /dev/sda1
          CHROOT: "y"
          CMD_LINE: apt -y update && apt -y install openssl
          DEFAULT_INTERPRETER: /bin/sh -c
          FS_TYPE: ext4
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/cexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: install-openssl
        timeout: 90

The following shows an example of adding a new user (tinkerbell) to an installed Ubuntu system:

      - environment:
          BLOCK_DEVICE: <block device path> # E.g. /dev/sda1
          FS_TYPE: ext4
          CHROOT: y
          DEFAULT_INTERPRETER: "/bin/sh -c"
          CMD_LINE: "useradd --password $(openssl passwd -1 tinkerbell) --shell /bin/bash --create-home --groups sudo tinkerbell"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/cexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: "create-user"
        timeout: 90

Look for more examples as they are added to the Tinkerbell examples page.

4.7.7 - Customize Bare Metal

Customizing EKS Anywhere on Bare Metal

4.7.7.1 - Customize HookOS for EKS Anywhere on Bare Metal

Customizing HookOS for EKS Anywhere on Bare Metal

To network boot bare metal machines in EKS Anywhere clusters, machines acquire a kernel and initial ramdisk that is referred to as HookOS. A default HookOS is provided when you create an EKS Anywhere cluster. However, there may be cases where you want and/or need to customize the default HookOS, such as to add drivers required to boot your particular type of hardware.

The following procedure describes how to customize and build HookOS. For more information on Tinkerbell’s HookOS Installation Environment, see the Tinkerbell Hook repo .

System requirements

  • >= 2G memory
  • >= 4 CPU cores (the more cores the better for kernel building)
  • >= 20G disk space

Dependencies

Be sure to install all the following dependencies.

  • jq
  • envsubst
  • pigz
  • docker
  • curl
  • bash >= 4.4
  • git
  • findutils
  1. Clone the Hook repo or your fork of that repo:

    git clone https://github.com/tinkerbell/hook.git
    cd hook/
    
  2. Run the Linux kernel menuconfig TUI and configure the kernel as needed. Be sure to save the config before exiting. The result of this step will be a modified kernel configuration file (./kernel/configs/generic-6.6.y-x86_64).

    ./build.sh kernel-config hook-latest-lts-amd64
    
  3. Build the kernel container image. The result of this step will be a container image. Use docker images quay.io/tinkerbell/hook-kernel to see it.

    ./build.sh kernel hook-latest-lts-amd64
    
  4. Build the HookOS kernel and initramfs artifacts. The result of this step will be the kernel and initramfs. These files are located at ./out/hook/vmlinuz-latest-lts-x86_64 and ./out/hook/initramfs-latest-lts-x86_64 respectively.

    ./build.sh linuxkit hook-latest-lts-amd64 
    
  5. Rename the kernel and initramfs files to vmlinuz-x86_64 and initramfs-x86_64 respectively.

    mv ./out/hook/vmlinuz-latest-lts-x86_64 ./out/hook/vmlinuz-x86_64
    mv ./out/hook/initramfs-latest-lts-x86_64 ./out/hook/initramfs-x86_64
    
  6. To use the kernel (vmlinuz-x86_64) and initial ram disk (initramfs-x86_64) when you build your EKS Anywhere cluster, see the description of the hookImagesURLPath field in your cluster configuration file.
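
For reference, assuming you host the renamed kernel and initramfs on a local web server (the URL and IP below are placeholders), the datacenter config would point at them roughly like this:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: my-cluster-name
spec:
  tinkerbellIP: "192.168.0.10"                      # placeholder Tinkerbell IP
  hookImagesURLPath: "http://my-web-server/hook"    # directory containing vmlinuz-x86_64 and initramfs-x86_64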

4.7.7.2 - DHCP options for EKS Anywhere

Using your existing DHCP service with EKS Anywhere Bare Metal

In order to facilitate network booting machines, EKS Anywhere bare metal runs its own DHCP server, Boots (a standalone service in the Tinkerbell stack). There can be numerous reasons why you may want to use an existing DHCP service instead of Boots: Security, compliance, access issues, existing layer 2 constraints, existing automation, and so on.

In environments where there is an existing DHCP service, this DHCP service can be configured to interoperate with EKS Anywhere. This document will cover how to make your existing DHCP service interoperate with EKS Anywhere bare metal. In this scenario EKS Anywhere will have no layer 2 DHCP responsibilities.

Note: Currently, Boots is responsible for more than just DHCP. So Boots can’t be entirely avoided in the provisioning process.

Additional Services in Boots

  • HTTP and TFTP servers for iPXE binaries
  • HTTP server for iPXE script
  • Syslog server (receiver)

Process

As a prerequisite, your existing DHCP service must serve host/address/static reservations for all machines that EKS Anywhere bare metal will be provisioning. This means that the IPAM details you enter into your hardware.csv must be used to create host/address/static reservations in your existing DHCP service.

Now, you configure your existing DHCP service to provide the location of the iPXE binary and script. This is a two-step interaction between machines and the DHCP service and enables the provisioning process to start.

  • Step 1: The machine broadcasts a request to network boot. Your existing DHCP service then provides the machine with all IPAM info as well as the location of the Tinkerbell iPXE binary (ipxe.efi). The machine configures its network interface with the IPAM info then downloads the Tinkerbell iPXE binary from the location provided by the DHCP service and runs it.

  • Step 2: Now with the Tinkerbell iPXE binary loaded and running, iPXE again broadcasts a request to network boot. The DHCP service again provides all IPAM info as well as the location of the Tinkerbell iPXE script (auto.ipxe). iPXE configures its network interface using the IPAM info and then downloads the Tinkerbell iPXE script from the location provided by the DHCP service and runs it.

Note: The auto.ipxe file is an iPXE script that tells iPXE where to download the HookOS kernel and initrd so that they can be loaded into memory.

The following diagram illustrates the process described above. Note that the diagram only describes the network booting parts of the DHCP interaction, not the exchange of IPAM info.

[Diagram: network booting interaction between the machine and the DHCP service]

Configuration

Below you will find code snippets showing how to add the two-step process from above to an existing DHCP service. Each config checks whether DHCP option 77 (user class option) equals “Tinkerbell”. If it does match, then the Tinkerbell iPXE script (auto.ipxe) will be served. If option 77 does not match, then the iPXE binary (ipxe.efi) will be served.

DHCP option: next server

Most DHCP services define a next server option. This option generally corresponds to either DHCP option 66 or the DHCP header sname field, and it is used to tell a machine where to download the initial bootloader.

Special consideration is required for the next server value when using EKS Anywhere to create your initial management cluster. This is because during this initial create phase a temporary bootstrap cluster is created and used to provision the management cluster.

The bootstrap cluster runs the Tinkerbell stack. When the management cluster is successfully created, the Tinkerbell stack is redeployed to the management cluster and the bootstrap cluster is deleted. This means that the IP address of the Tinkerbell stack will change.

As a temporary and one-time step, the IP address used by the existing DHCP service for next server will need to be the IP address of the temporary bootstrap cluster. This will be the IP of the Admin machine, or, if you use the CLI flag --tinkerbell-bootstrap-ip, the IP you provide to that flag should be used as the next server in your existing DHCP service.

Once the management cluster is created, the IP address used by the existing DHCP service for next server will need to be updated to the tinkerbellIP. This IP is defined in your cluster spec at tinkerbellDatacenterConfig.spec.tinkerbellIP. The next server IP will not need to be updated again.

Note: The upgrade phase of a management cluster or the creation of any workload clusters will not require you to change the next server IP in the config of your existing DHCP service.

Code snippets

The following code snippets are generic examples of the config needed to enable the two-step process to an existing DHCP service. It does not cover the IPAM info that is also required.

dnsmasq

dnsmasq.conf

dhcp-match=tinkerbell, option:user-class, Tinkerbell
dhcp-boot=tag:!tinkerbell,ipxe.efi,none,192.168.2.112
dhcp-boot=tag:tinkerbell,http://192.168.2.112/auto.ipxe

Kea DHCP

kea.json

{
    "Dhcp4": {
        "client-classes": [
            {
                "name": "tinkerbell",
                "test": "substring(option[77].hex,0,10) == 'Tinkerbell'",
                "boot-file-name": "http://192.168.2.112/auto.ipxe"
            },
            {
                "name": "default",
                "test": "not(substring(option[77].hex,0,10) == 'Tinkerbell')",
                "boot-file-name": "ipxe.efi"
            }
        ],
        "subnet4": [
            {
                "next-server": "192.168.2.112"
            }
        ]
    }
}

ISC DHCP

dhcpd.conf

 if exists user-class and option user-class = "Tinkerbell" {
     filename "http://192.168.2.112/auto.ipxe";
 } else {
     filename "ipxe.efi";
 }
 next-server 192.168.2.112;

Microsoft DHCP server

Please follow the ipxe.org guide on how to configure Microsoft DHCP server.

4.8 - Create Snow cluster

Create an EKS Anywhere cluster on Snow

4.8.1 - Create Snow cluster

Create an EKS Anywhere cluster on AWS Snowball Edge

EKS Anywhere supports an AWS Snow provider for EKS Anywhere deployments.

This document walks you through setting up EKS Anywhere on Snow as a standalone, self-managed cluster or combined set of management/workload clusters. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite checklist

EKS Anywhere on Snow needs:

Also, see the Ports and protocols page for information on ports that need to be accessible from control plane, worker, and Admin machines.

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or standalone cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a standalone cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    


    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here . Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here .

    If you are going to use packages, set up authentication. These credentials should have limited capabilities :

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"  
    
  2. Set an environment variable for your cluster name

    export CLUSTER_NAME=mgmt
    
  3. Generate a cluster config file for your Snow provider

    eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider snow > eksa-mgmt-cluster.yaml
    
  4. Optionally import images to private registry

    This optional step imports EKS Anywhere artifacts and release bundle to a local registry. This is required for air-gapped installation.

    eksctl anywhere import images \
       --input /usr/lib/eks-a/artifacts/artifacts.tar.gz \
       --bundles /usr/lib/eks-a/manifests/bundle-release.yaml \
       --registry $PRIVATE_REGISTRY_ENDPOINT \
       --insecure=true
    
  5. Modify the cluster config (eksa-mgmt-cluster.yaml) as follows:

    • Refer to the Snow configuration for information on configuring this cluster config for a Snow provider.
    • Add Optional configuration settings as needed.
  6. Set Credential Environment Variables

    Before you create the initial cluster, you will need to use the credentials and ca-bundles files that are in the Admin instance, and export these environment variables for your AWS Snowball device credentials. Make sure you use single quotes around the values so that your shell does not interpret the values:

    export EKSA_AWS_CREDENTIALS_FILE='/PATH/TO/CREDENTIALS/FILE'
    export EKSA_AWS_CA_BUNDLES_FILE='/PATH/TO/CABUNDLES/FILE'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

  7. Create cluster

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       --bundles-override /usr/lib/eks-a/manifests/bundle-release.yaml
    
  8. Once the cluster is created you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  9. Check the cluster nodes:

    To check that cluster creation completed, list the machines to see the control plane and worker nodes:

    kubectl get machines -A
    

    Example command output:

    NAMESPACE    NAME                        CLUSTER  NODENAME                    PROVIDERID                                       PHASE    AGE    VERSION
    eksa-system  mgmt-etcd-dsxb5             mgmt                                 aws-snow:///192.168.1.231/s.i-8b0b0631da3b8d9e4  Running  4m59s  
    eksa-system  mgmt-md-0-7b7c69cf94-99sll  mgmt     mgmt-md-0-1-58nng           aws-snow:///192.168.1.231/s.i-8ebf6b58a58e47531  Running  4m58s  v1.24.9-eks-1-24-7
    eksa-system  mgmt-srrt8                  mgmt     mgmt-control-plane-1-xs4t9  aws-snow:///192.168.1.231/s.i-8414c7fcabcf3d7c1  Running  4m58s  v1.24.9-eks-1-24-7
    ...    
    
  10. Check the cluster:

    You can now use the cluster as you would any Kubernetes cluster. To try it out, run the test application with:

    export CLUSTER_NAME=mgmt
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in Deploy test workload .

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider snow > eksa-w01-cluster.yaml
    

    Refer to the initial config described earlier for the required and optional settings.

    NOTE: Ensure workload cluster object names (Cluster, SnowDatacenterConfig, SnowMachineConfig, etc.) are distinct from management cluster object names.

  3. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  4. Create a workload cluster in one of the following ways:

    • GitOps: See Manage separate workload clusters with GitOps

    • Terraform: See Manage separate workload clusters with Terraform

      NOTE: snowDatacenterConfig.spec.identityRef and a Snow bootstrap credentials secret need to be specified when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not create a Snow bootstrap credentials secret like the eksctl CLI does when the field is empty.

      snowMachineConfig.spec.sshKeyName must be specified to SSH into your nodes when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not generate the keys like eksctl CLI does when the field is empty.

    • eksctl CLI: To create a workload cluster with eksctl, run:

      eksctl anywhere create cluster \
          -f eksa-w01-cluster.yaml  \
          --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
      

      As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

    • kubectl CLI: The cluster lifecycle feature lets you use kubectl, or other tools that can talk to the Kubernetes API, to create a workload cluster. To use kubectl, run:

      kubectl apply -f eksa-w01-cluster.yaml
      

      To check the state of a cluster managed with the cluster lifecycle feature, use kubectl to show the cluster object with its status.

      The status field on the cluster object holds information about the current state of the cluster.

      kubectl get clusters w01 -o yaml
      

      The cluster has been fully created once the status of the Ready condition is marked True. See the cluster status guide for more information.

  5. Check the workload cluster:

    You can now use the workload cluster as you would any Kubernetes cluster.

    • If your workload cluster was created with eksctl, change your credentials to point to the new workload cluster (for example, w01), then run the test application with:

      export CLUSTER_NAME=w01
      export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
    • If your workload cluster was created with GitOps or Terraform, the kubeconfig for your new cluster is stored as a secret on the management cluster. You can get credentials and run the test application as follows:

      kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
      export KUBECONFIG=w01.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      

    Verify the test application in the deploy test application section.

  6. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload cluster, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.

Next steps:

  • See the Cluster management section for more information on common operational tasks like deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

4.8.2 - Configure for Snow

Full EKS Anywhere configuration reference for an AWS Snow cluster.

This is a generic template with detailed descriptions below for reference. The following additional optional configuration can also be included:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 10.1.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 3
    endpoint:
      host: ""
    machineGroupRef:
      kind: SnowMachineConfig
      name: my-cluster-machines
  datacenterRef:
    kind: SnowDatacenterConfig
    name: my-cluster-datacenter
  externalEtcdConfiguration:
    count: 3
    machineGroupRef:
      kind: SnowMachineConfig
      name: my-cluster-machines
  kubernetesVersion: "1.28"
  workerNodeGroupConfigurations:
  - count: 1
    machineGroupRef:
      kind: SnowMachineConfig
      name: my-cluster-machines
    name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowDatacenterConfig
metadata:
  name: my-cluster-datacenter
spec: {}

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowMachineConfig
metadata:
  name: my-cluster-machines
spec:
  amiID: ""
  instanceType: sbe-c.large
  sshKeyName: ""
  osFamily: ubuntu
  devices:
  - ""
  containersVolume:
    size: 25
  network:
    directNetworkInterfaces:
    - index: 1
      primary: true
      ipPoolRef:
        kind: SnowIPPool
        name: ip-pool-1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowIPPool
metadata:
  name: ip-pool-1
spec:
  pools:
  - ipStart: 192.168.1.2
    ipEnd: 192.168.1.14
    subnet: 192.168.1.0/24
    gateway: 192.168.1.1
  - ipStart: 192.168.1.55
    ipEnd: 192.168.1.250
    subnet: 192.168.1.0/24
    gateway: 192.168.1.1

Cluster Fields

name (required)

Name of your cluster (my-cluster-name in this example).

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.
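
As an illustration, the Cilium options described above might be combined as follows; the CIDRs and interface name are placeholders chosen to match this example spec:

clusterNetwork:
  cniConfig:
    cilium:
      policyEnforcementMode: default
      egressMasqueradeInterfaces: eth0      # placeholder interface name or prefix
      routingMode: direct
      ipv4NativeRoutingCIDR: 10.1.0.0/16    # placeholder preconfigured CIDR
  pods:
    cidrBlocks:
    - 10.1.0.0/16
  services:
    cidrBlocks:
    - 10.96.0.0/12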

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other devices.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing.

controlPlaneConfiguration.taints (optional)

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels (optional)

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes (default: 1). This value is ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

See the troubleshooting topic machine health check remediation not allowed and choose a count high enough to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].taints (optional)

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.

workerNodeGroupConfigurations[*].labels (optional)

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

externalEtcdConfiguration.count (optional)

Number of etcd members.

externalEtcdConfiguration.machineGroupRef (optional)

Refers to the Kubernetes object with Snow specific configuration for your etcd members. See SnowMachineConfig Fields below.

datacenterRef (required)

Refers to the Kubernetes object with Snow environment specific configuration. See SnowDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

SnowDatacenterConfig Fields

identityRef (required)

Refers to the Kubernetes secret object with Snow device credentials used to reconcile the cluster.

SnowMachineConfig Fields

amiID (optional)

AMI ID from which to create the machine instance. If this field is empty, the Snow provider uses AMI lookup logic to find a suitable AMI ID based on the Kubernetes version and osFamily.

instanceType (optional)

Type of the Snow EC2 machine instance. See Quotas for Compute Instances on a Snowball Edge Device for supported instance types on Snow (Default: sbe-c.large).

osFamily

Operating System on instance machines. Permitted value: ubuntu.

physicalNetworkConnector (optional)

Type of Snow physical network connector to use for creating direct network interfaces. Permitted values: SFP_PLUS, QSFP, RJ45 (Default: SFP_PLUS).

sshKeyName (optional)

Name of the AWS Snow SSH key pair you want to configure to access your machine instances.

The default is eksa-default-{cluster-name}-{uuid}.

devices

A device IP list from which to bootstrap and provision machine instances.

network

Custom network setting for the machine instances. DHCP and static IP configurations are supported.

network.directNetworkInterfaces[0].index (optional)

Index number of a direct network interface (DNI) used to clarify the position in the list. Must be no smaller than 1 and no greater than 8.

network.directNetworkInterfaces[0].primary (optional)

Whether the DNI is primary or not. One and only one primary DNI is required in the directNetworkInterfaces list.

network.directNetworkInterfaces[0].vlanID (optional)

VLAN ID to use for the DNI.

network.directNetworkInterfaces[0].dhcp (optional)

Whether DHCP is to be used to assign IP for the DNI.

network.directNetworkInterfaces[0].ipPoolRef (optional)

Refers to a SnowIPPool object which provides a range of ip addresses. When specified, an IP address selected from the pool will be allocated to the DNI.
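
Putting the DNI fields together, here is a sketch of a single static-IP interface; it assumes the pool is referenced by kind and name like the other *Ref fields in this file, and the VLAN ID and pool name are illustrative:

    network:
      directNetworkInterfaces:
      - index: 1
        primary: true
        vlanID: 100
        dhcp: false
        ipPoolRef:
          kind: SnowIPPool
          name: ip-pool-1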

containersVolume (optional)

Configuration option for customizing containers data storage volume.

containersVolume.size (optional)

Size of the storage for containerd runtime in Gi.

The field is optional for Ubuntu and if specified, the size must be no smaller than 8 Gi.

containersVolume.deviceName (optional)

Containers volume device name.

containersVolume.type (optional)

Type of the containers volume. Permitted values: sbp1, sbg1. (Default: sbp1)

sbp1 stands for capacity-optimized HDD. sbg1 is performance-optimized SSD.

nonRootVolumes (optional)

Configuration options for the non root storage volumes.

nonRootVolumes[0].deviceName (optional)

Non root volume device name. Must be specified and cannot have prefix “/dev/sda” as it is reserved for root volume and containers volume.

nonRootVolumes[0].size (optional)

Size of the storage device for the non root volume. Must be no smaller than 8 Gi.

nonRootVolumes[0].type (optional)

Type of the non root volume. Permitted values: sbp1, sbg1. (Default: sbp1)

sbp1 stands for capacity-optimized HDD. sbg1 is performance-optimized SSD.
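
A combined sketch of the volume settings described above (the sizes and device name are illustrative):

    containersVolume:
      size: 25
      type: sbp1
    nonRootVolumes:
    - deviceName: /dev/sdc
      size: 25
      type: sbg1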

SnowIPPool Fields

pools[0].ipStart (optional)

Start address of an IP range.

pools[0].ipEnd (optional)

End address of an IP range.

pools[0].subnet (optional)

An IP subnet for determining whether an IP is within the subnet.

pools[0].gateway (optional)

Gateway of the subnet for routing purpose.
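
Assembled into a full object, a SnowIPPool might look like the following sketch (the name, addresses, and gateway are illustrative):

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: SnowIPPool
    metadata:
      name: ip-pool-1
    spec:
      pools:
      - ipStart: 10.1.10.100
        ipEnd: 10.1.10.120
        subnet: 10.1.10.0/24
        gateway: 10.1.10.1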

4.9 - Create CloudStack cluster

Create an EKS Anywhere cluster on Apache CloudStack

4.9.1 - Requirements for EKS Anywhere on CloudStack

CloudStack provider requirements for EKS Anywhere

To run EKS Anywhere, you will need:

Prepare Administrative machine

Set up an Administrative machine as described in Install EKS Anywhere .

Prepare a CloudStack environment

To prepare a CloudStack environment to run EKS Anywhere, you need the following:

  • A CloudStack 4.14 or later environment. CloudStack 4.16 is used for examples in these docs.

  • Capacity to deploy 6-10 VMs.

  • One shared network in CloudStack to use for the cluster. EKS Anywhere clusters need access to CloudStack through the network to enable self-managing and storage capabilities.

  • A Red Hat Enterprise Linux qcow2 image built using the image-builder tool as described in artifacts .

  • User credentials (CloudStack API key and Secret key) to create VMs and attach networks in CloudStack.

  • Prepare DHCP IP addresses pool

  • One IP address routable from the cluster but excluded from DHCP offering. This IP address is to be used as the Control Plane Endpoint IP. Below are some suggestions to ensure that this IP address is never handed out by your DHCP server. You may need to contact your network engineer.

    • Pick an IP address reachable from the cluster subnet which is excluded from DHCP range OR
    • Alter DHCP ranges to leave out an IP address(s) at the top and/or the bottom of the range OR
    • Create an IP reservation for this IP on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a non-existent mac address.

Each VM will require:

  • 2 vCPUs
  • 8GB RAM
  • 25GB Disk

The administrative machine and the target workload environment will need network access (TCP/443) to:

CloudStack information needed before creating the cluster

You need at least the following information before creating the cluster. See CloudStack configuration for a complete list of options and Preparing CloudStack for instructions on creating the assets.

  • Static IP Addresses: You will need one IP address for the management cluster control plane endpoint, and a separate one for the controlplane of each workload cluster you add.

Let’s say you are going to have the management cluster and two workload clusters. For those, you would need three IP addresses, one for each. All of those addresses will be configured the same way in the configuration file you will generate for each cluster.

A static IP address will be used for each control plane VM in your EKS Anywhere cluster. Choose IP addresses in your network range that do not conflict with other VMs and make sure they are excluded from your DHCP offering. An IP address will be the value of the property controlPlaneConfiguration.endpoint.host in the config file of the management cluster. A separate IP address must be assigned for each workload cluster.

  • CloudStack datacenter: You need the name of the CloudStack Datacenter plus the following for each Availability Zone (availabilityZones). Most items can be represented by name or ID:
    • Account (account): Account with permission to create a cluster (optional, admin by default).
    • Credentials (credentialsRef): Credentials provided in an ini file used to access the CloudStack API endpoint. See CloudStack Getting started for details.
    • Domain (domain): The CloudStack domain in which to deploy the cluster (optional, ROOT by default)
    • Management endpoint (managementApiEndpoint): Endpoint the CloudStack client uses to make API calls.
    • Zone network (zone.network): Either name or ID of the network.
  • CloudStack machine configuration: For each set of machines (for example, you could configure separate set of machines for control plane, worker, and etcd nodes), obtain the following information. This must be predefined in the cloudStack instance and identified by name or ID:
    • Compute offering (computeOffering): Choose an existing compute offering (such as large-instance), reflecting the amount of resources to apply to each VM.
    • Operating system (template): Identifies the operating system image to use (such as rhel8-k8s-118).
    • Users (users.name): Identifies users and SSH keys needed to access the VMs.

4.9.2 - Preparing CloudStack for EKS Anywhere

Set up a CloudStack cluster to prepare it for EKS Anywhere

Before you can create an EKS Anywhere cluster in CloudStack, you must do some setup on your CloudStack environment. This document helps you get what you need to fulfill the prerequisites described in the Requirements and values you need for CloudStack configuration .

Set up a domain and user credentials

Either use the ROOT domain or create a new domain to deploy your EKS Anywhere cluster. One or more users are grouped under a domain. This example creates a user account for the domain with a Domain Administrator role. From the Apache CloudStack console:

  1. Select Domains.

  2. Select Add Domain.

  3. Fill in the Name for the domain (eksa in this example) and select OK.

  4. Select Accounts -> Add Account, then fill in the form to add a user with DomainAdmin role, as shown in the following figure:

    Add a user account with the DomainAdmin role

  5. To generate API credentials for the user, select Accounts, choose the account you created, select View Users, then select the user and select the Generate Keys button.

  6. Select OK to confirm key generation. The API Key and Secret Key should appear as shown in the following figure:

    Generate API Key and Secret Key

  7. Copy the API Key and Secret Key to a credentials file to use when you generate your cluster. For example:

    [Global]
    api-url = http://10.0.0.2:8080/client/api
    api-key = OI7pm0xrPMYjLlMfqrEEj...
    secret-key = tPsgAECJwTHzbU4wMH...
    

Import template

You need to build at least one operating system image and import it as a template to use for your cluster nodes. Currently, only Red Hat Enterprise Linux 8 images are supported. To build a RHEL-based image to use with EKS Anywhere, see Build node images .

  1. Make your image accessible from your local machine or from a URL that is accessible to your CloudStack setup.

  2. Select Images -> Templates, then select either Register Template from URL or Select local Template. The following figure shows registering a template from a URL:

    Adding a RHEL-based EKS Anywhere image template

    This example imports a RHEL image (QCOW2), identifies the zone from which it will be available, uses KVM as the hypervisor, uses the osdefault Root disk controller, and identifies the OS Type as Red Hat Enterprise Linux 8.0. Select OK to save the template.

  3. Note the template name and zone so you can use it later when you deploy your cluster.

Create CloudStack configurations

Take a look at the following CloudStack configuration settings before creating your EKS Anywhere cluster. You will need to identify many of these assets when you create your cluster specification:

DatacenterConfig information

Here is how to get information to go into the CloudStackDatacenterConfig section of the CloudStack cluster configuration file:

  • Domain: Select Domains, then select your domain name from under the ROOT domain. Select View Users, then the user with the Domain Admin role, and consider setting limits to what each user can consume from the Resources and Configure Limits tabs.

  • Zones: Select Infrastructure -> Zones. Find a Zone where you can deploy your cluster or create a new one.

    Select from available Zones

  • Network: Select Network -> Guest networks. Choose a network to use for your cluster or create a new one.

Here is what some of that information would look like in a cluster configuration:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackDatacenterConfig
metadata:
  name: my-cluster-name-datacenter
spec:
  availabilityZones:
  - account: admin
    credentialsRef: global
    domain: eksa
    managementApiEndpoint: ""
    name: az-1
    zone:
      name: Zone2
      network:
        name: "SharedNet2"

MachineConfig information

Here is how to get information to go into CloudStackMachineConfig sections of the CloudStack cluster configuration file:

  • computeOffering: Select Service Offerings -> Compute Offerings to see a list of available combinations of CPU cores, CPU, and memory to apply to your node instances. See the following figure for an example:

    Choose or add a compute offering to set node resources

  • template: Select Images -> Templates to see available operating system image templates.

  • diskOffering: Select Storage -> Volumes, then select Create Volume, if you want to create disk storage to attach to your nodes (optional). You can use this to store logs or other data you want saved outside of the nodes. When you later create the cluster configuration, you can identify things like where you want the device mounted, the type of file system, labels and other information.

  • AffinityGroupIds: Select Compute -> Affinity Groups, then select Add new affinity group (optional). By creating an affinity group, you can tell all VMs from a set of instances to either all run on different physical hosts (anti-affinity) or just run anywhere they can (affinity).

Here is what some of that information would look like in a cluster configuration:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  computeOffering:
    name: "Medium Instance"
  template:
    name: "rhel8-kube-1.28-eksa"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/log/kubernetes: /data-small/var/log/kubernetes
  affinityGroupIds:
  - control-plane-anti-affinity

4.9.3 - Create CloudStack cluster

Create a cluster on CloudStack

EKS Anywhere supports a CloudStack provider for EKS Anywhere deployments. This document walks you through setting up EKS Anywhere on CloudStack in a way that:

  • Deploys an initial cluster on your CloudStack environment. That cluster can be used as a standalone cluster (to run workloads) or a management cluster (to create and manage other clusters)
  • Deploys zero or more workload clusters from the management cluster

If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters. Using a management cluster makes it faster to provision and delete workload clusters. Also it lets you keep CloudStack credentials for a set of clusters in one place: on the management cluster. The alternative is to simply use your initial cluster to run workloads. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite Checklist

EKS Anywhere needs to:

Also, see the Ports and protocols page for information on ports that need to be accessible from control plane, worker, and Admin machines.

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or standalone cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a standalone cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here . Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here .

    If you are going to use packages, set up authentication. These credentials should have limited capabilities :

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"
    
  2. Generate an initial cluster config (named mgmt for this example):

    export CLUSTER_NAME=mgmt
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider cloudstack > eksa-mgmt-cluster.yaml
    
  3. Create credential file

    Create a credential file (for example, cloud-config) and add the credentials needed to access your CloudStack environment. The file should include:

    • api-key: Obtained from CloudStack
    • secret-key: Obtained from CloudStack
    • api-url: The URL to your CloudStack API endpoint

    For example:

    [Global]
    api-key     =  -Dk5uB0DE3aWng
    secret-key  =  -0DQLunsaJKxCEEHn44XxP80tv6v_RB0DiDtdgwJ
    api-url     =  http://172.16.0.1:8080/client/api
    
    

    You can have multiple credential entries. To match this example, you would enter global as the credentialsRef in the cluster config file for your CloudStack availability zone. You can configure multiple credentials for multiple availability zones.

  4. Modify the initial cluster config (eksa-mgmt-cluster.yaml) as follows:

    • Refer to Cloudstack configuration for information on configuring this cluster config for a CloudStack provider.
    • Add Optional configuration settings as needed.
    • Create at least two control plane nodes, three worker nodes, and three etcd nodes, to provide high availability and rolling upgrades.
  5. Set Environment Variables

    Convert the credential file into base64 and set the following environment variable to that value:

    export EKSA_CLOUDSTACK_B64ENCODED_SECRET=$(base64 -i cloud-config)
    
  6. Create cluster

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    
  7. Once the cluster is created you can use it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  8. Check the cluster nodes:

    To check that cluster creation completed, list the machines to see the control plane, etcd, and worker nodes:

    kubectl get machines -A
    

    Example command output

    NAMESPACE   NAME                PROVIDERID           PHASE    VERSION
    eksa-system mgmt-b2xyz          cloudstack:/xxxxx    Running  v1.23.1-eks-1-21-5
    eksa-system mgmt-etcd-r9b42     cloudstack:/xxxxx    Running
    eksa-system mgmt-md-8-6xr-rnr   cloudstack:/xxxxx    Running  v1.23.1-eks-1-21-5
    ...
    

    The etcd machine doesn’t show the Kubernetes version because it doesn’t run the kubelet service.

  9. Check the initial cluster’s CRD:

    To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:

    kubectl get clusters mgmt -o yaml
    

    Example command output

    ...
    kubernetesVersion: "1.28"
    managementCluster:
      name: mgmt
    workerNodeGroupConfigurations:
    ...
    

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider cloudstack > eksa-w01-cluster.yaml
    
  3. Modify the workload cluster config (eksa-w01-cluster.yaml) as follows. Refer to the initial config described earlier for the required and optional settings. In particular:

    • Ensure workload cluster object names (Cluster, CloudStackDatacenterConfig, CloudStackMachineConfig, etc.) are distinct from management cluster object names.
  4. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  5. Create a workload cluster in one of the following ways:

    • GitOps: See Manage separate workload clusters with GitOps

    • Terraform: See Manage separate workload clusters with Terraform

      NOTE: spec.users[0].sshAuthorizedKeys must be specified to SSH into your nodes when provisioning a cluster through GitOps or Terraform, as the EKS Anywhere Cluster Controller will not generate the keys like eksctl CLI does when the field is empty.

    • eksctl CLI: To create a workload cluster with eksctl, run:

      eksctl anywhere create cluster \
          -f eksa-w01-cluster.yaml  \
          # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
          --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
      

      As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

    • kubectl CLI: The cluster lifecycle feature lets you use kubectl, or other tools that can talk to the Kubernetes API, to create a workload cluster. To use kubectl, run:

      kubectl apply -f eksa-w01-cluster.yaml
      

      To check the state of a cluster managed with the cluster lifecycle feature, use kubectl to show the cluster object with its status.

      The status field on the cluster object holds information about the current state of the cluster.

      kubectl get clusters w01 -o yaml
      

      The cluster has been fully created once the status of the Ready condition is marked True. See the cluster status guide for more information.

  6. To check the workload cluster, get the workload cluster credentials and run a test workload:

    • If your workload cluster was created with eksctl, change your credentials to point to the new workload cluster (for example, w01), then run the test application with:

      export CLUSTER_NAME=w01
      export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
    • If your workload cluster was created with GitOps or Terraform, the kubeconfig for your new cluster is stored as a secret on the management cluster. You can get credentials and run the test application as follows:

      kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
      export KUBECONFIG=w01.kubeconfig
      kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
      
  7. Add more workload clusters:

    To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml), modifying resource names, and running the create cluster command again.

Next steps

  • See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

Optional configuration

Disable KubeVIP

The KubeVIP deployment used for load balancing Kube API Server requests can be disabled by setting an environment variable that will be interpreted by the eksctl anywhere create cluster command. Disabling the KubeVIP deployment is useful if you wish to use an external load balancer for load balancing Kube API Server requests. When disabling the KubeVIP load balancer you become responsible for hosting the Spec.ControlPlaneConfiguration.Endpoint.Host IP which must route requests to a Kube API Server instance of the cluster being provisioned.

export CLOUDSTACK_KUBE_VIP_DISABLED=true

4.9.4 - CloudStack configuration

Full EKS Anywhere configuration reference for a CloudStack cluster

This is a generic template with detailed descriptions below for reference. The following additional optional configuration can also be included:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 3
    endpoint:
      host: ""
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-name-cp
    taints:
    - key: ""
      value: ""
      effect: ""
    labels:
      "<key1>": ""
      "<key2>": ""
  datacenterRef:
    kind: CloudStackDatacenterConfig
    name: my-cluster-name
  externalEtcdConfiguration:
    count: 3
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-name-etcd
  kubernetesVersion: "1.28"
  managementCluster:
    name: my-cluster-name
  workerNodeGroupConfigurations:
  - count: 2
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-name
    taints:
    - key: ""
      value: ""
      effect: ""
    labels:
      "<key1>": ""
      "<key2>": ""
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackDatacenterConfig
metadata:
  name: my-cluster-name-datacenter
spec:
  availabilityZones:
  - account: admin
    credentialsRef: global
    domain: domain1
    managementApiEndpoint: ""
    name: az-1
    zone:
      name: zone1
      network:
        name: "net1"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  computeOffering:
    name: "m4-large"
  users:
  - name: capc
    sshAuthorizedKeys:
    - ssh-rsa AAAA...
  template:
    name: "rhel8-k8s-118"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/log/kubernetes: /data-small/var/log/kubernetes
  affinityGroupIds:
  - control-plane-anti-affinity
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name
spec:
  computeOffering:
    name: "m4-large"
  users:
  - name: capc
    sshAuthorizedKeys:
    - ssh-rsa AAAA...
  template:
    name: "rhel8-k8s-118"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/log/pods: /data-small/var/log/pods
    /var/log/containers: /data-small/var/log/containers
  affinityGroupIds:
  - worker-affinity
  userCustomDetails:
    foo: bar
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name-etcd
spec:
  computeOffering:
    name: "m4-large"
  users:
  - name: "capc"
    sshAuthorizedKeys:
    - "ssh-rsa AAAAB3N...
  template:
    name: "rhel8-k8s-118"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/lib: /data-small/var/lib
  affinityGroupIds:
  - etcd-affinity
---

Cluster Fields

name (required)

Name of your cluster my-cluster-name in this example

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.
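
Taken together, a sketch of the optional Cilium settings described above (the CIDR values are illustrative):

    clusterNetwork:
      cniConfig:
        cilium:
          policyEnforcementMode: default
          routingMode: direct
          ipv4NativeRoutingCIDR: 10.1.0.0/16
          ipv6NativeRoutingCIDR: fd00:10:1::/48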

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing. Suggestions on how to ensure this IP does not cause issues during the cluster creation process are here

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See CloudStackMachineConfig Fields below.

controlPlaneConfiguration.taints (optional)

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint, node-role.kubernetes.io/master. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint node-role.kubernetes.io/master. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels (optional)

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

A special label value is supported by the CAPC provider:

    labels:
      cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain

The ds.meta_data.failuredomain value will be replaced with a failuredomain name where the node is deployed, such as az-1.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.

datacenterRef (required)

Refers to the Kubernetes object with CloudStack environment specific configuration. See CloudStackDatacenterConfig Fields below.

externalEtcdConfiguration.count (optional)

Number of etcd members

externalEtcdConfiguration.machineGroupRef (optional)

Refers to the Kubernetes object with CloudStack specific configuration for your etcd members. See CloudStackMachineConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

managementCluster (required)

Identifies the name of the management cluster. If this is a standalone cluster, or if it is serving as the management cluster for other workload clusters, this will be the same as the cluster name.

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes. (default: 1) It will be ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

Refer to the troubleshooting guidance for machine health check remediation not allowed and choose a count sufficient to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See CloudStackMachineConfig Fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].taints (optional)

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.

workerNodeGroupConfigurations[*].labels (optional)

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default. A special label value is supported by the CAPC provider:

    labels:
      cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain

The ds.meta_data.failuredomain value will be replaced with a failuredomain name where the node is deployed, such as az-1.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
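
For example, a sketch of pinning one worker node group to an older Kubernetes version than the rest of the cluster during a modular upgrade (the names and counts are illustrative):

    kubernetesVersion: "1.28"
    workerNodeGroupConfigurations:
    - name: md-0
      count: 2
      machineGroupRef:
        kind: CloudStackMachineConfig
        name: my-cluster-name
      kubernetesVersion: "1.27"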

CloudStackDatacenterConfig

availabilityZones.account (optional)

Account used to access CloudStack. As long as you pass valid credentials through availabilityZones.credentialsRef, this value is not required.

availabilityZones.credentialsRef (required)

If you passed credentials through the environment variable EKSA_CLOUDSTACK_B64ENCODED_SECRET noted in Create CloudStack production cluster , you can identify those credentials here. For that example, you would use the profile name global. You can instead use a previously created secret on the Kubernetes cluster in the eksa-system namespace.

availabilityZones.domain (optional)

CloudStack domain to deploy the cluster. The default is ROOT.

availabilityZones.managementApiEndpoint (required)

Location of the CloudStack API management endpoint. For example, http://10.11.0.2:8080/client/api.

availabilityZones.zone.{id,name} (required)

Name or ID of the CloudStack zone on which to deploy the cluster.

availabilityZones.zone.network.{id,name} (required)

CloudStack network name or ID to use with the cluster.

CloudStackMachineConfig

In the example above, there are separate CloudStackMachineConfig sections for the control plane (my-cluster-name-cp), worker (my-cluster-name) and etcd (my-cluster-name-etcd) nodes.

computeOffering.{id,name} (required)

Name or ID of the CloudStack compute offering to use for the machine instances.

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh. You can add as many users objects as you want.

The default is capc.

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder by default.

template.{id,name} (required)

The VM template to use for your EKS Anywhere cluster. Currently, a VM based on RHEL 8.6 is required. This can be a name or ID. The template.name must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, the template.name field should include 1.24, 1_24, 1-24 or 124. See the Artifacts page for instructions for building RHEL-based images.
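
For example, for a Kubernetes 1.28 cluster, a template reference like the following sketch would satisfy the naming requirement (the template name is illustrative):

    template:
      name: "rhel8-kube-1-28-eksa"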

diskOffering (optional)

Name representing a disk you want to mount into nodes for this CloudStackMachineConfig

diskOffering.mountPath (optional)

Mount point on which to mount the disk.

diskOffering.device (optional)

Device name of the disk partition to mount.

diskOffering.filesystem (optional)

File system type used to format the filesystem on the disk.

diskOffering.label (optional)

Label to apply to the disk partition.

symlinks (optional)

Symbolic link of a directory or file you want to mount from the host filesystem to the mounted filesystem.

userCustomDetails (optional)

Add key/value pairs to nodes in a CloudStackMachineConfig. These can be used for things like identifying sets of nodes that you want to add to a security group that opens selected ports.

affinityGroupIDs (optional)

Group ID to attach to the set of host systems to indicate how affinity is done for services on those systems.

affinity (optional)

Allows you to set pro and anti affinity for the CloudStackMachineConfig. This can be used in a mutually exclusive fashion with the affinityGroupIDs field.

4.10 - Create Nutanix cluster

Create an EKS Anywhere cluster on Nutanix Cloud Infrastructure with AHV

4.10.1 - Overview

Overview of EKS Anywhere cluster creation on Nutanix

Creating a Nutanix cluster

The following diagram illustrates the cluster creation process for the Nutanix provider.

Start creating a Nutanix cluster

Start creating EKS Anywhere cluster

1. Generate a config file for Nutanix

Identify the provider (--provider nutanix) and the cluster name in the eksctl anywhere generate clusterconfig command and direct the output to a cluster config .yaml file.

2. Modify the config file

Modify the generated cluster config file to suit your situation. Details about this config file can be found on the Nutanix Config page.

3. Launch the cluster creation

After modifying the cluster configuration file, run the eksctl anywhere cluster create command, providing the cluster config. The verbosity can be increased to see more details on the cluster creation process (-v=9 provides maximum verbosity).

4. Create bootstrap cluster

The cluster creation process starts with creating a temporary Kubernetes bootstrap cluster on the Administrative machine.

First, the cluster creation process runs a series of commands to validate the Nutanix environment:

  • Checks that the Nutanix environment is available.
  • Authenticates the Nutanix provider to the Nutanix environment using the supplied Prism Central endpoint information and credentials.

For each of the NutanixMachineConfig objects, the following validations are performed:

  • Validates the provided resource configuration (CPU, memory, storage)
  • Validates the Nutanix subnet
  • Validates the Nutanix Prism Element cluster
  • Validates the image
  • (Optional) Validates the Nutanix project

If all validations pass, you will see this message:

✅ Nutanix Provider setup is valid

During bootstrap cluster creation, the following messages will be shown:

Creating new bootstrap cluster
Provider specific pre-capi-install-setup on bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific post-setup

Next, the Nutanix provider will create the machines in the Nutanix environment.

Continuing cluster creation

The following diagram illustrates the activities that occur next:

Continue creating EKS Anywhere cluster

1. CAPI management

Cluster API (CAPI) management will orchestrate the creation of the target cluster in the Nutanix environment.

Creating new workload cluster

2. Create the target cluster nodes

The control plane and worker nodes will be created and configured using the Nutanix provider.

3. Add Cilium networking

Add Cilium as the CNI plugin to use for networking between the cluster services and pods.

Installing networking on workload cluster

4. Moving cluster management to target cluster

CAPI components are installed on the target cluster. Next, cluster management is moved from the bootstrap cluster to the target cluster.

Creating EKS-A namespace
Installing cluster-api providers on workload cluster
Installing EKS-A secrets on workload cluster
Installing resources on management cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Installing EKS-D components on workload cluster
Creating EKS-A CRDs instances on workload cluster

5. Saving cluster configuration file

The cluster configuration file is saved.

Writing cluster config file

6. Delete bootstrap cluster

The bootstrap cluster is no longer needed and is deleted when the target cluster is up and running:

Delete EKS Anywhere bootstrap cluster

The target cluster can now be used as either:

  • A standalone cluster (to run workloads) or
  • A management cluster (to optionally create one or more workload clusters)

Creating workload clusters (optional)

The target cluster acts as a management cluster. One or more workload clusters can be managed by this management cluster as described in Create separate workload clusters :

  • Use eksctl to generate a cluster config file for the new workload cluster.
  • Modify the cluster config with a new cluster name and different Nutanix resources.
  • Use eksctl to create the new workload cluster from the new cluster config file.

4.10.2 - Requirements for EKS Anywhere on Nutanix Cloud Infrastructure

Preparing a Nutanix Cloud Infrastructure provider for EKS Anywhere

To run EKS Anywhere, you will need:

Prepare Administrative machine

Set up an Administrative machine as described in Install EKS Anywhere .

Prepare a Nutanix environment

To prepare a Nutanix environment to run EKS Anywhere, you need the following:

  • A Nutanix environment running AOS 5.20.4+ with AHV and Prism Central 2022.1+

  • Capacity to deploy 6-10 VMs

  • DHCP service or Nutanix IPAM running in your environment in the primary VM network for your workload cluster

  • Prepare DHCP IP addresses pool

  • A VM image imported into the Prism Image Service for the workload VMs

  • User credentials to create VMs and attach networks, etc.

  • One IP address routable from the cluster but excluded from DHCP/IPAM offering. This IP address is to be used as the Control Plane Endpoint IP

    Below are some suggestions to ensure that this IP address is never handed out by your DHCP server.

    You may need to contact your network engineer.

    • Pick an IP address reachable from the cluster subnet which is excluded from the DHCP range OR
    • Alter DHCP ranges to leave out an IP address(s) at the top and/or the bottom of the range OR
    • Create an IP reservation for this IP on your DHCP server. This is usually accomplished by adding a dummy mapping of this IP address to a non-existent mac address.
    • Block an IP address from the Nutanix IPAM managed network using aCLI

Each VM will require:

  • 2 vCPUs
  • 4GB RAM
  • 40GB Disk

The administrative machine and the target workload environment will need network access (TCP/443) to:

  • Prism Central endpoint (must be accessible to EKS Anywhere clusters)
  • Prism Element Data Services IP and CVM endpoints (for CSI storage connections)
  • public.ecr.aws (for pulling EKS Anywhere container images)
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries and manifests)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
  • api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region)
  • d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container images)
  • api.github.com (only if GitOps is enabled)

Nutanix information needed before creating the cluster

You need to get the following information before creating the cluster:

  • Static IP Addresses: You will need one IP address for the management cluster control plane endpoint, and a separate one for the controlplane of each workload cluster you add.

    Let’s say you are going to have the management cluster and two workload clusters. For those, you would need three IP addresses, one for each. All of those addresses will be configured the same way in the configuration file you will generate for each cluster.

    A static IP address will be used for control plane API server HA in each of your EKS Anywhere clusters. Choose IP addresses in your network range that do not conflict with other VMs and make sure they are excluded from your DHCP offering.

    An IP address will be the value of the property controlPlaneConfiguration.endpoint.host in the config file of the management cluster. A separate IP address must be assigned for each workload cluster.

  • Prism Central FQDN or IP Address: The Prism Central fully qualified domain name or IP address.

  • Prism Element Cluster Name: The AOS cluster to deploy the EKS Anywhere cluster on.

  • VM Subnet Name: The VM network to deploy your EKS Anywhere cluster on.

  • Machine Template Image Name: The VM image to use for your EKS Anywhere cluster.

  • additionalTrustBundle (required if using a self-signed PC SSL certificate): The PEM encoded CA trust bundle of the root CA that issued the certificate for Prism Central.
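
If Prism Central uses a self-signed certificate, the trust bundle can be supplied in the cluster configuration; the following sketch assumes it is set on the NutanixDatacenterConfig spec, and the certificate body is a placeholder:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: NutanixDatacenterConfig
    metadata:
      name: nutanix-cluster
    spec:
      endpoint: pc01.cloud.internal
      port: 9440
      credentialRef:
        kind: Secret
        name: nutanix-credentials
      additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        <PEM-encoded root CA certificate>
        -----END CERTIFICATE-----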

4.10.3 - Preparing Nutanix Cloud Infrastructure for EKS Anywhere

Set up a Nutanix cluster to prepare it for EKS Anywhere

Certain resources must be in place with appropriate user permissions to create an EKS Anywhere cluster using the Nutanix provider.

Configuring Nutanix User

You need a Prism Admin user to create EKS Anywhere clusters on top of your Nutanix cluster.

Build Nutanix AHV node images

Follow the steps outlined in artifacts to create an Ubuntu-based image for Nutanix AHV and import it into the AOS Image Service.

4.10.4 - Create Nutanix cluster

Create an EKS Anywhere cluster on Nutanix Cloud Infrastructure with AHV

EKS Anywhere supports a Nutanix Cloud Infrastructure (NCI) provider for EKS Anywhere deployments. This document walks you through setting up EKS Anywhere on Nutanix Cloud Infrastructure with AHV in a way that:

  • Deploys an initial cluster in your Nutanix environment. That cluster can be used as a self-managed cluster (to run workloads) or a management cluster (to create and manage other clusters)
  • Deploys zero or more workload clusters from the management cluster

If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters. Using a management cluster makes it faster to provision and delete workload clusters. It also lets you keep NCI credentials for a set of clusters in one place: on the management cluster. The alternative is to simply use your initial cluster to run workloads. See Cluster topologies for details.

Note: Before you create your cluster, you have the option of validating the EKS Anywhere bundle manifest container images by following instructions in the Verify Cluster Images page.

Prerequisite Checklist

EKS Anywhere needs to:

Also, see the Ports and protocols page for information on ports that need to be accessible from control plane, worker, and Admin machines.

Steps

The following steps are divided into two sections:

  • Create an initial cluster (used as a management or self-managed cluster)
  • Create zero or more workload clusters from the management cluster

Create an initial cluster

Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).

  1. Optional Configuration

    Set License Environment Variable

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    

    After you have created your eksa-mgmt-cluster.yaml and set your credential environment variables, you will be ready to create the cluster.

    Configure Curated Packages

    The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here . Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here .

    If you are going to use packages, set up authentication. These credentials should have limited capabilities :

    export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
    export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
    export EKSA_AWS_REGION="us-west-2"  
    
  2. Generate an initial cluster config (named mgmt for this example):

    CLUSTER_NAME=mgmt
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider nutanix > eksa-mgmt-cluster.yaml
    
  3. Modify the initial cluster config (eksa-mgmt-cluster.yaml) as follows:

    • Refer to Nutanix configuration for information on configuring this cluster config for a Nutanix provider.
    • Add Optional configuration settings as needed.
    • Create at least three control plane nodes, three worker nodes, and three etcd nodes, to provide high availability and rolling upgrades.
  4. Set Credential Environment Variables

    Before you create the initial cluster, you will need to set and export these environment variables for your Nutanix Prism Central user name and password. Make sure you use single quotes around the values so that your shell does not interpret the values:

    export EKSA_NUTANIX_USERNAME='billy'
    export EKSA_NUTANIX_PASSWORD='t0p$ecret'
    
  5. Create cluster

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster \
       -f eksa-mgmt-cluster.yaml \
       --bundles-override ./eks-anywhere-downloads/bundle-release.yaml \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    
  6. Once the cluster is created, you can access it with the generated KUBECONFIG file in your local directory:

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    
  7. Check the cluster nodes:

    To check that the cluster is ready, list the machines to see the control plane and worker nodes:

    kubectl get machines -n eksa-system
    

    Example command output

       NAME              CLUSTER  NODENAME                                 PROVIDERID       PHASE     AGE   VERSION
       mgmt-4gtt2        mgmt     mgmt-control-plane-1670343878900-2m4ln   nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-d42xn        mgmt     mgmt-control-plane-1670343878900-jbfxt   nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-md-0-9868m   mgmt     mgmt-md-0-1670343878901-lkmxw            nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-md-0-njpk2   mgmt     mgmt-md-0-1670343878901-9clbz            nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-md-0-p4gp2   mgmt     mgmt-md-0-1670343878901-mbktx            nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
       mgmt-zkwrr        mgmt     mgmt-control-plane-1670343878900-jrdkk   nutanix://xxxx   Running   11m   v1.24.7-eks-1-24-4
    
  8. Check the initial cluster’s CRD:

    To ensure you are looking at the initial cluster, list the cluster CRD to see that the name of its management cluster is itself:

    kubectl get clusters mgmt -o yaml
    

    Example command output

    ...
    kubernetesVersion: "1.28"
    managementCluster:
      name: mgmt
    workerNodeGroupConfigurations:
    ...
    

Create separate workload clusters

Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.

  1. Set License Environment Variable (Optional)

    Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster if you are licensing an existing cluster):

    export EKSA_LICENSE='my-license-here'
    
  2. Generate a workload cluster config:

    CLUSTER_NAME=w01
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider nutanix > eksa-w01-cluster.yaml
    

    Refer to the initial config described earlier for the required and optional settings. Ensure workload cluster object names (Cluster, NutanixDatacenterConfig, NutanixMachineConfig, etc.) are distinct from management cluster object names.

  3. Be sure to set the managementCluster field to identify the name of the management cluster.

    For example, the management cluster, mgmt is defined for our workload cluster w01 as follows:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: w01
    spec:
      managementCluster:
        name: mgmt
    
  4. Create a workload cluster

    To create a new workload cluster from your management cluster run this command, identifying:

    • The workload cluster YAML file
    • The initial cluster’s kubeconfig (this causes the workload cluster to be managed from the management cluster)
    eksctl anywhere create cluster \
       -f eksa-w01-cluster.yaml  \
       --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig \
       # --install-packages packages.yaml \ # uncomment to install curated packages at cluster creation
    

    As noted earlier, adding the --kubeconfig option tells eksctl to use the management cluster identified by that kubeconfig file to create a different workload cluster.

  5. Check the workload cluster:

    You can now use the workload cluster as you would any Kubernetes cluster. Change your kubeconfig to point to the new workload cluster (for example, w01), then run the test application with:

    export CLUSTER_NAME=w01
    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    Verify the test application in the deploy test application section.

  6. Add more workload clusters:

    To add more workload clusters, go through the same steps you used to create the initial workload cluster: copy the config file to a new name (such as eksa-w02-cluster.yaml), modify the resource names, and run the create cluster command again, as shown in the sketch below.
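    For example, a minimal sketch of that flow (the w02 names and file paths are illustrative; GNU sed syntax shown):

    cp eksa-w01-cluster.yaml eksa-w02-cluster.yaml
    # Rename the Cluster, NutanixDatacenterConfig, and NutanixMachineConfig objects
    sed -i 's/w01/w02/g' eksa-w02-cluster.yaml
    eksctl anywhere create cluster \
       -f eksa-w02-cluster.yaml \
       --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig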

Next steps:

  • See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

4.10.5 - Configure for Nutanix

Full EKS Anywhere configuration reference for a Nutanix cluster

This is a generic template with detailed descriptions below for reference. Additional optional configuration, such as OIDC, etcd, proxy, and GitOps settings, can also be included; see the Optional Configuration section for details.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
 name: mgmt
 namespace: default
spec:
 clusterNetwork:
   cniConfig:
     cilium: {}
   pods:
     cidrBlocks:
       - 192.168.0.0/16
   services:
     cidrBlocks:
       - 10.96.0.0/16
 controlPlaneConfiguration:
   count: 3
   endpoint:
     host: ""
   machineGroupRef:
     kind: NutanixMachineConfig
     name: mgmt-cp-machine
 datacenterRef:
   kind: NutanixDatacenterConfig
   name: nutanix-cluster
 externalEtcdConfiguration:
   count: 3
   machineGroupRef:
     kind: NutanixMachineConfig
     name: mgmt-etcd
 kubernetesVersion: "1.28"
 workerNodeGroupConfigurations:
   - count: 1
     machineGroupRef:
       kind: NutanixMachineConfig
       name: mgmt-machine
     name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixDatacenterConfig
metadata:
 name: nutanix-cluster
 namespace: default
spec:
 endpoint: pc01.cloud.internal
 port: 9440
 credentialRef:
   kind: Secret
   name: nutanix-credentials
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
 annotations:
   anywhere.eks.amazonaws.com/control-plane: "true"
 name: mgmt-cp-machine
 namespace: default
spec:
 cluster:
   name: nx-cluster-01
   type: name
 image:
   name: eksa-ubuntu-2004-kube-v1.28
   type: name
 memorySize: 4Gi
 osFamily: ubuntu
 subnet:
   name: vm-network
   type: name
 systemDiskSize: 40Gi
 project:
   type: name
   name: my-project
 users:
   - name: eksa
     sshAuthorizedKeys:
       - ssh-rsa AAAA…
 vcpuSockets: 2
 vcpusPerSocket: 1
 additionalCategories:
   - key: my-category
     value: my-category-value
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
 name: mgmt-etcd
 namespace: default
spec:
 cluster:
   name: nx-cluster-01
   type: name
 image:
   name: eksa-ubuntu-2004-kube-v1.28
   type: name
 memorySize: 4Gi
 osFamily: ubuntu
 subnet:
   name: vm-network
   type: name
 systemDiskSize: 40Gi
 project:
   type: name
   name: my-project
 users:
   - name: eksa
     sshAuthorizedKeys:
       - ssh-rsa AAAA…
 vcpuSockets: 2
 vcpusPerSocket: 1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
 name: mgmt-machine
 namespace: default
spec:
 cluster:
   name: nx-cluster-01
   type: name
 image:
   name: eksa-ubuntu-2004-kube-v1.28
   type: name
 memorySize: 4Gi
 osFamily: ubuntu
 subnet:
   name: vm-network
   type: name
 systemDiskSize: 40Gi
 project:
   type: name
   name: my-project
 users:
   - name: eksa
     sshAuthorizedKeys:
       - ssh-rsa AAAA…
 vcpuSockets: 2
 vcpusPerSocket: 1
---

Cluster Fields

name (required)

Name of your cluster (mgmt in this example).

clusterNetwork (required)

Network configuration.

clusterNetwork.cniConfig (required)

CNI plugin configuration. Supports cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode (optional)

Optionally specify a policyEnforcementMode of default, always or never.

clusterNetwork.cniConfig.cilium.egressMasqueradeInterfaces (optional)

Optionally specify a network interface name or interface prefix used for masquerading. See EgressMasqueradeInterfaces option.

clusterNetwork.cniConfig.cilium.skipUpgrade (optional)

When true, skip Cilium maintenance during upgrades. Also see Use a custom CNI.

clusterNetwork.cniConfig.cilium.routingMode (optional)

Optionally specify the routing mode. Accepts default and direct. Also see RoutingMode option.

clusterNetwork.cniConfig.cilium.ipv4NativeRoutingCIDR (optional)

Optionally specify the CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.cniConfig.cilium.ipv6NativeRoutingCIDR (optional)

Optionally specify the IPv6 CIDR to use when RoutingMode is set to direct. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT.

clusterNetwork.pods.cidrBlocks[0] (required)

The pod subnet specified in CIDR notation. Only 1 pod CIDR block is permitted. The CIDR block should not conflict with the host or service network ranges.

clusterNetwork.services.cidrBlocks[0] (required)

The service subnet specified in CIDR notation. Only 1 service CIDR block is permitted. This CIDR block should not conflict with the host or pod network ranges.

clusterNetwork.dns.resolvConf.path (optional)

File path to a file containing a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with Nutanix specific configuration for your nodes. See NutanixMachineConfig fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver loadbalancing. Suggestions on how to ensure this IP does not cause issues during the cluster creation process are here.
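For illustration, a populated endpoint block might look like the following (the IP shown is a placeholder; use a free static IP from your own network that is outside the DHCP range):

 controlPlaneConfiguration:
   count: 3
   endpoint:
     host: "10.20.30.40"
   machineGroupRef:
     kind: NutanixMachineConfig
     name: mgmt-cp-machine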

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations[*].count (optional)

Number of worker nodes. (default: 1) It will be ignored if the cluster autoscaler curated package is installed and autoscalingConfiguration is used to specify the desired range of replicas.

Refer to the troubleshooting entry on machine health check remediation not allowed and choose a count sufficient to allow machine health check remediation.

workerNodeGroupConfigurations[*].machineGroupRef (required)

Refers to the Kubernetes object with Nutanix specific configuration for your nodes. See NutanixMachineConfig fields below.

workerNodeGroupConfigurations[*].name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations[*].autoscalingConfiguration.minCount (optional)

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations[*].autoscalingConfiguration.maxCount (optional)

Maximum number of nodes for this node group’s autoscaling configuration.
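For example, a worker node group managed by the cluster autoscaler might look like the following sketch (the group and machine config names are placeholders; count is ignored when the cluster autoscaler package is installed and autoscalingConfiguration is used):

 workerNodeGroupConfigurations:
   - name: md-0
     machineGroupRef:
       kind: NutanixMachineConfig
       name: mgmt-machine
     autoscalingConfiguration:
       minCount: 1
       maxCount: 5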

workerNodeGroupConfigurations[*].kubernetesVersion (optional)

The Kubernetes version you want to use for this worker node group. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24
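For example, a worker node group pinned to an older Kubernetes version during a modular upgrade might look like this sketch (names are placeholders; the referenced machine config must use an image matching this version, as described under image.name below):

 workerNodeGroupConfigurations:
   - name: md-1
     count: 2
     kubernetesVersion: "1.27"
     machineGroupRef:
       kind: NutanixMachineConfig
       name: mgmt-machine-1-27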

externalEtcdConfiguration.count (optional)

Number of etcd members

externalEtcdConfiguration.machineGroupRef (optional)

Refers to the Kubernetes object with Nutanix specific configuration for your etcd members. See NutanixMachineConfig fields below.

datacenterRef (required)

Refers to the Kubernetes object with Nutanix environment specific configuration. See NutanixDatacenterConfig fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.28, 1.27, 1.26, 1.25, 1.24

NutanixDatacenterConfig Fields

endpoint (required)

The Prism Central server fully qualified domain name or IP address. If the server IP is used, the PC SSL certificate must have an IP SAN configured.

port (required)

The Prism Central server port. (Default: 9440)

credentialRef (required)

Reference to the Kubernetes secret that contains the Prism Central credentials.

insecure (optional)

Set insecure to true if the Prism Central server does not have a valid certificate. This is not recommended for production use cases. (Default: false)

additionalTrustBundle (optional; required if using a self-signed PC SSL certificate)

The PEM encoded CA trust bundle.

The additionalTrustBundle needs to be populated with the PEM-encoded x509 certificate of the Root CA that issued the certificate for Prism Central. Suggestions on how to obtain this certificate are here .

Example:

 additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    <certificate string>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <certificate string>
    -----END CERTIFICATE-----    
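One common way to inspect the certificate chain presented by Prism Central is with standard OpenSSL tooling (a sketch; substitute your own Prism Central endpoint and port):

openssl s_client -connect pc01.cloud.internal:9440 -showcerts </dev/null
# Copy the Root CA certificate block (BEGIN/END CERTIFICATE) from the output
# into the additionalTrustBundle field shown above.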

NutanixMachineConfig Fields

cluster (required)

Reference to the Prism Element cluster.

cluster.type (required)

Type to identify the Prism Element cluster. (Permitted values: name or uuid)

cluster.name (name or UUID required)

Name of the Prism Element cluster.

cluster.uuid (name or UUID required)

UUID of the Prism Element cluster.
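For example, the same reference expressed by UUID rather than by name would look like the following sketch (the UUID is a placeholder):

 cluster:
   type: uuid
   uuid: "00000000-0000-0000-0000-000000000000"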

image (required)

Reference to the OS image used for the system disk.

image.type (required)

Type to identify the OS image. (Permitted values: name or uuid)

image.name (name or UUID required)

Name of the image. The image.name must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, image.name must include 1.24, 1_24, 1-24 or 124.

image.uuid (name or UUID required)

UUID of the image. The name of the image associated with the uuid must contain the Cluster.Spec.KubernetesVersion or Cluster.Spec.WorkerNodeGroupConfiguration[].KubernetesVersion version (in case of modular upgrade). For example, if the Kubernetes version is 1.24, the name associated with the image.uuid field must include 1.24, 1_24, 1-24 or 124.

memorySize (optional)

Size of RAM on virtual machines (Default: 4Gi)

osFamily (optional)

Operating System on virtual machines. Permitted values: ubuntu and redhat. (Default: ubuntu)

subnet (required)

Reference to the subnet to be assigned to the VMs.

subnet.name (name or UUID required)

Name of the subnet.

subnet.type (required)

Type to identify the subnet. (Permitted values: name or uuid)

subnet.uuid (name or UUID required)

UUID of the subnet.

systemDiskSize (optional)

Amount of storage assigned to the system disk. (Default: 40Gi)

vcpuSockets (optional)

Amount of vCPU sockets. (Default: 2)

vcpusPerSocket (optional)

Amount of vCPUs per socket. (Default: 1)

project (optional)

Reference to an existing project used for the virtual machines.

project.type (required)

Type to identify the project. (Permitted values: name or uuid)

project.name (name or UUID required)

Name of the project

project.uuid (name or UUID required)

UUID of the project

additionalCategories (optional)

Reference to a list of existing Nutanix Categories to be assigned to virtual machines.

additionalCategories[0].key

Nutanix Category to add to the virtual machine.

additionalCategories[0].value

Value of the Nutanix Category to add to the virtual machine

users (optional)

The users you want to configure to access your virtual machines. Only one is permitted at this time.

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh.

The default is eksa if osFamily=ubuntu

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder by default.

4.10.6 -

  • Prism Central endpoint (must be accessible to EKS Anywhere clusters)
  • Prism Element Data Services IP and CVM endpoints (for CSI storage connections)
  • public.ecr.aws (for pulling EKS Anywhere container images)
  • anywhere-assets.eks.amazonaws.com (to download the EKS Anywhere binaries and manifests)
  • distro.eks.amazonaws.com (to download EKS Distro binaries and manifests)
  • d2glxqk2uabbnd.cloudfront.net (for EKS Anywhere and EKS Distro ECR container images)
  • api.ecr.us-west-2.amazonaws.com (for EKS Anywhere package authentication matching your region)
  • d5l0dvt14r5h8.cloudfront.net (for EKS Anywhere package ECR container images)
  • api.github.com (only if GitOps is enabled)

4.11 - Create Docker Cluster (dev only)

Create an EKS Anywhere cluster with Docker on your local machine, laptop, or cloud instance

EKS Anywhere docker provider deployments

EKS Anywhere supports a Docker provider for development and testing use cases only. This allows you to try EKS Anywhere on your local machine or laptop before deploying to other infrastructure such as vSphere or bare metal.

Prerequisites

System and network requirements

  • Mac OS 10.15+ / Ubuntu 20.04.2 LTS or 22.04 LTS / RHEL or Rocky Linux 8.8+
  • 4 CPU cores
  • 16GB memory
  • 30GB free disk space
  • If you are running in an airgapped environment, the Admin machine must be amd64.

Here are a few other things to keep in mind:

  • If you are using Ubuntu, use the Docker CE installation instructions to install Docker and not the Snap installation, as described here.

  • If you are using EKS Anywhere v0.15 or earlier and Ubuntu 21.10 or 22.04, you will need to switch from cgroups v2 to cgroups v1. For details, see Troubleshooting Guide.

Tools

Install EKS Anywhere CLI tools

To get started with EKS Anywhere, you must first install the eksctl CLI and the eksctl anywhere plugin. This is the primary interface for EKS Anywhere and what you will use to create a local Docker cluster. The EKS Anywhere plugin requires eksctl version 0.66.0 or newer.

Homebrew

Note: if you already have eksctl installed, you can install the eksctl anywhere plugin manually by following the instructions in the next section. This package also installs kubectl and aws-iam-authenticator.

brew install aws/tap/eks-anywhere

Manual

Install eksctl

curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C /tmp
sudo install -m 0755 /tmp/eksctl /usr/local/bin/eksctl

Install the eksctl-anywhere plugin

RELEASE_VERSION=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.latestVersion")
EKS_ANYWHERE_TARBALL_URL=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.releases[] | select(.version==\"$RELEASE_VERSION\").eksABinary.$(uname -s | tr A-Z a-z).uri")
curl $EKS_ANYWHERE_TARBALL_URL \
    --silent --location \
    | tar xz ./eksctl-anywhere
sudo install -m 0755 ./eksctl-anywhere /usr/local/bin/eksctl-anywhere

Install kubectl. See the Kubernetes documentation for more information.

export OS="$(uname -s | tr A-Z a-z)" ARCH=$(test "$(uname -m)" = 'x86_64' && echo 'amd64' || echo 'arm64')
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/${OS}/${ARCH}/kubectl"
sudo install -m 0755 ./kubectl /usr/local/bin/kubectl

Create a local Docker cluster

  1. Generate a cluster config. The cluster config will contain the settings for your local Docker cluster. The eksctl anywhere generate command populates a cluster config with EKS Anywhere defaults and best practices.

    CLUSTER_NAME=mgmt
    eksctl anywhere generate clusterconfig $CLUSTER_NAME \
       --provider docker > $CLUSTER_NAME.yaml
    

    The command above creates a file named mgmt.yaml (that is, $CLUSTER_NAME.yaml) with the contents below in the path where it is executed. The configuration specification is divided into two sections: Cluster and DockerDatacenterConfig. These are the minimum configuration settings you must provide to create a Docker cluster. You can optionally configure OIDC, etcd, proxy, and GitOps as described here.

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
       name: mgmt
    spec:
       clusterNetwork:
          cniConfig:
             cilium: {}
          pods:
             cidrBlocks:
                - 192.168.0.0/16
          services:
             cidrBlocks:
                - 10.96.0.0/12
       controlPlaneConfiguration:
          count: 1
       datacenterRef:
          kind: DockerDatacenterConfig
          name: mgmt
       externalEtcdConfiguration:
          count: 1
       kubernetesVersion: "1.28"
       managementCluster:
          name: mgmt
       workerNodeGroupConfigurations:
          - count: 1
            name: md-0
    ---
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: DockerDatacenterConfig
    metadata:
       name: mgmt
    spec: {}
    
    
  2. Create Docker Cluster. Note the following command may take several minutes to complete. You can run the command with -v 6 to increase logging verbosity to see the progress of the command.

    For a regular cluster create (with internet access), type the following:

    eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
    

    For an airgapped cluster create, follow Preparation for airgapped deployments instructions, then type the following:

    eksctl anywhere create cluster -f $CLUSTER_NAME.yaml --bundles-override ./eks-anywhere-downloads/bundle-release.yaml
    

    Expand for sample output:

    Performing setup and validations
    ✅ validation succeeded {"validation": "docker Provider setup is valid"}
    Creating new bootstrap cluster
    Installing cluster-api providers on bootstrap cluster
    Provider specific setup
    Creating new workload cluster
    Installing networking on workload cluster
    Installing cluster-api providers on workload cluster
    Moving cluster management from bootstrap to workload cluster
    Installing EKS-A custom components (CRD and controller) on workload cluster
    Creating EKS-A CRDs instances on workload cluster
    Installing GitOps Toolkit on workload cluster
    GitOps field not specified, bootstrap flux skipped
    Deleting bootstrap cluster
    🎉 Cluster created!
    ----------------------------------------------------------------------------------
    The Amazon EKS Anywhere Curated Packages are only available to customers with the
    Amazon EKS Anywhere Enterprise Subscription
    ----------------------------------------------------------------------------------
    ...
    

    NOTE: to install curated packages during cluster creation, use the --install-packages packages.yaml flag.
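    For example (a sketch; packages.yaml is a placeholder for your own curated packages configuration file):

    eksctl anywhere create cluster -f $CLUSTER_NAME.yaml --install-packages packages.yaml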

  3. Access Docker cluster

    Once the cluster is created you can use it with the generated kubeconfig in the local directory. If you used the same naming conventions as the example above, you will find mgmt/mgmt-eks-a-cluster.kubeconfig (that is, ${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig) in the directory where you ran the commands.

    export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
    kubectl get ns
    

    Example command output

    NAME                                STATUS   AGE
    capd-system                         Active   21m
    capi-kubeadm-bootstrap-system       Active   21m
    capi-kubeadm-control-plane-system   Active   21m
    capi-system                         Active   21m
    capi-webhook-system                 Active   21m
    cert-manager                        Active   22m
    default                             Active   23m
    eksa-packages                       Active   23m
    eksa-system                         Active   20m
    kube-node-lease                     Active   23m
    kube-public                         Active   23m
    kube-system                         Active   23m
    

    You can now use the cluster like you would any Kubernetes cluster.

  4. The following command will deploy a test application:

    kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
    

    To interact with the deployed application, review the steps in the Deploy test workload page .

Next steps:

  • See the Cluster management section for more information on common operational tasks like scaling and deleting the cluster.

  • See the Package management section for more information on post-creation curated packages installation.

4.12 - Optional Configuration

Optional Config references for EKS Anywhere clusters such as etcd, OS, CNI, IRSA, proxy, and registry mirror

The configuration pages below describe optional features that you can add to your EKS Anywhere provider’s clusterspec file. See each provider’s installation section for details on which optional features are supported.

4.12.1 - etcd

EKS Anywhere cluster yaml etcd specification reference

Provider support details

vSphere, Bare Metal, Nutanix, CloudStack, Snow

There are two types of etcd topologies for configuring a Kubernetes cluster:

  • Stacked: The etcd members and control plane components are colocated (run on the same node/machines)
  • Unstacked/External: With the unstacked or external etcd topology, etcd members have dedicated machines and are not colocated with control plane components

The unstacked etcd topology is recommended for an HA cluster for the following reasons:

  • External etcd topology decouples the control plane components and etcd member. For example, if a control plane-only node fails, or if there is a memory leak in a component like kube-apiserver, it won’t directly impact an etcd member.
  • etcd is resource intensive, so it is safer to have dedicated nodes for it, since it may need more disk space and higher network bandwidth. A separate etcd cluster therefore makes for a more resilient HA setup.

EKS Anywhere supports both topologies. To configure a cluster with the unstacked/external etcd topology, update your cluster configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   clusterNetwork:
      pods:
         cidrBlocks:
            - 192.168.0.0/16
      services:
         cidrBlocks:
            - 10.96.0.0/12
      cniConfig:
         cilium: {}
   controlPlaneConfiguration:
      count: 1
      endpoint:
         host: ""
      machineGroupRef:
         kind: VSphereMachineConfig
         name: my-cluster-name-cp
   datacenterRef:
      kind: VSphereDatacenterConfig
      name: my-cluster-name
   # etcd configuration
   externalEtcdConfiguration:
      count: 3
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-name-etcd
   kubernetesVersion: "1.27"
   workerNodeGroupConfigurations:
      - count: 1
        machineGroupRef:
           kind: VSphereMachineConfig
           name: my-cluster-name
        name: md-0

externalEtcdConfiguration (under Cluster)

External etcd configuration for your Kubernetes cluster.

count (required)

This determines the number of etcd members in the cluster. The recommended number is 3.

machineGroupRef (required)

Refers to the Kubernetes object with provider specific configuration for your nodes.

4.12.2 - Encrypting Confidential Data at Rest

EKS Anywhere cluster specification for encryption of etcd data at-rest

You can configure EKS Anywhere clusters to encrypt confidential API resource data, such as secrets, at-rest in etcd using a KMS encryption provider. EKS Anywhere supports a hybrid model for configuring etcd encryption where cluster admins are responsible for deploying and maintaining the KMS provider on the cluster and EKS Anywhere handles configuring kube-apiserver with the KMS properties.

Because of this model, etcd encryption can only be enabled on cluster upgrades after the KMS provider has been deployed on the cluster.

Before you begin

Before enabling etcd encryption, make sure the KMS encryption provider you plan to use is deployed and running on the cluster.

Example etcd encryption configuration

The following cluster spec enables etcd encryption configuration:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  ...
  etcdEncryption:
  - providers:
    - kms:
        cachesize: 1000
        name: example-kms-config
        socketListenAddress: unix:///var/run/kmsplugin/socket.sock
        timeout: 3s
    resources:
    - secrets
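Since this configuration is applied through a cluster upgrade, a typical flow is to add the etcdEncryption block to the existing cluster spec (after the KMS provider is running) and then run the upgrade command. A minimal sketch, with placeholder file and kubeconfig paths (the --kubeconfig flag points at the management cluster's kubeconfig):

eksctl anywhere upgrade cluster -f my-cluster.yaml \
   --kubeconfig my-cluster/my-cluster-eks-a-cluster.kubeconfig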

Description of etcd encryption fields

etcdEncryption

Key used to specify etcd encryption configuration for a cluster. This field is only supported on cluster upgrades.

  • providers

    Key used to specify which encryption provider to use. Currently, only one provider can be configured.

    • kms

      Key used to configure KMS encryption provider.

      • name

        Key used to set the name of the KMS plugin. This cannot be changed once set.

      • socketListenAddress

        Key used to specify the listen address of the gRPC server (KMS plugin). This is a UNIX domain socket address, as shown in the example above.

      • cachesize

        Number of data encryption keys (DEKs) to be cached in the clear. When cached, DEKs can be used without another call to the KMS; whereas DEKs that are not cached require a call to the KMS to unwrap. If cachesize isn’t specified, a default of 1000 is used.

      • timeout

        How long should kube-apiserver wait for kms-plugin to respond before returning an error. If a timeout isn’t specified, a default timeout of 3s is used.

  • resources

    Key used to specify a list of resources that should be encrypted using the corresponding encryption provider. These can be native Kubernetes resources such as secrets and configmaps or custom resource definitions such as clusters.anywhere.eks.amazonaws.com.

Example AWS Encryption Provider DaemonSet

Here’s a sample AWS encryption provider daemonset configuration.

Expand
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: aws-encryption-provider
  name: aws-encryption-provider
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: aws-encryption-provider
  template:
    metadata:
      labels:
        app: aws-encryption-provider
    spec:
      containers:
      - image: <AWS_ENCRYPTION_PROVIDER_IMAGE>    # Specify the AWS KMS encryption provider image 
        name: aws-encryption-provider
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        command:
        - /aws-encryption-provider
        - --key=<KEY_ARN>                         # Specify the arn of KMS key to be used for encryption/decryption
        - --region=<AWS_REGION>                   # Specify the region in which the KMS key exists
        - --listen=<KMS_SOCKET_LISTEN_ADDRESS>    # Specify a socket listen address for the KMS provider. Example: /var/run/kmsplugin/socket.sock
        ports:
        - containerPort: 8080
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
        volumeMounts:
          - mountPath: /var/run/kmsplugin
            name: var-run-kmsplugin
          - mountPath: /root/.aws
            name: aws-credentials
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/control-plane"
        effect: "NoSchedule"
      volumes:
      - hostPath:
          path: /var/run/kmsplugin
          type: DirectoryOrCreate
        name: var-run-kmsplugin
      - hostPath:
          path: /etc/kubernetes/aws
          type: DirectoryOrCreate
        name: aws-credentials

4.12.3 - Operating system

EKS Anywhere cluster yaml specification for host OS configuration

Host OS Configuration

You can configure certain host OS settings through EKS Anywhere.

Provider support details

vSphere, Bare Metal, Nutanix, CloudStack, Snow

The following cluster spec shows an example of how to configure host OS settings:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig        # Replace "VSphereMachineConfig" with "TinkerbellMachineConfig" for Tinkerbell clusters
metadata:
  name: machine-config
spec:
  ...
  hostOSConfiguration:
    ntpConfiguration:
      servers:
        - time-a.ntp.local
        - time-b.ntp.local
    certBundles:
    - name: "bundle_1"
      data: |
        -----BEGIN CERTIFICATE-----
        MIIF1DCCA...
        ...
        es6RXmsCj...
        -----END CERTIFICATE-----

        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----        
    bottlerocketConfiguration:
      kubernetes:
        allowedUnsafeSysctls:
          - "net.core.somaxconn"
          - "net.ipv4.ip_local_port_range"
        clusterDNSIPs:
          - 10.96.0.10
        maxPods: 100
      kernel:
        sysctlSettings:
          net.core.wmem_max: "8388608"
          net.core.rmem_max: "8388608"
          ...
      boot:
        bootKernelParameters:
          slub_debug:
          - "options,slabs"
          ...

Host OS Configuration Spec Details

hostOSConfiguration

Top level object used for host OS configurations.

  • ntpConfiguration

    Key used for configuring NTP servers on your EKS Anywhere cluster nodes.

    • servers
      Servers is a list of NTP servers that should be configured on EKS Anywhere cluster nodes.
  • certBundles

    Key used for configuring custom trusted CA certs on your EKS Anywhere cluster nodes. Multiple cert bundles can be configured.

    • name

    Name of the cert bundle that should be configured on EKS Anywhere cluster nodes. This must be a unique name for each entry

    • data

    Data of the cert bundle that should be configured on EKS Anywhere cluster nodes. This takes in a PEM formatted cert bundle and can contain more than one CA cert per entry.


  • bottlerocketConfiguration

    Key used for configuring Bottlerocket-specific settings on EKS Anywhere cluster nodes. These settings are only valid for Bottlerocket.

    • kubernetes

      Key used for configuring Bottlerocket Kubernetes settings.

      • allowedUnsafeSysctls

        List of unsafe sysctls that should be enabled on the node.

      • clusterDNSIPs

        List of IPs of DNS service(s) running in the kubernetes cluster.

      • maxPods

        Maximum number of pods that can be scheduled on each node.

    • kernel

      Key used for configuring Bottlerocket Kernel settings.

      • sysctlSettings
        Map of kernel sysctl settings that should be enabled on the node.
    • boot

      Key used for configuring Bottlerocket Boot settings.

      • bootKernelParameters
        Map of Boot Kernel parameters Bottlerocket should configure.

4.12.4 - Container Networking Interface

EKS Anywhere cluster yaml cni plugin specification reference

Specifying CNI Plugin in EKS Anywhere cluster spec

Provider support details

vSphere, Bare Metal, Nutanix, CloudStack, Snow

EKS Anywhere currently supports two CNI plugins: Cilium and Kindnet. Only one of them can be selected for a cluster, and the plugin cannot be changed once the cluster is created. Up until the 0.7.x releases, the plugin had to be specified using the cni field on cluster spec. Starting with release 0.8, the plugin should be specified using the new cniConfig field as follows:

  • For selecting Cilium as the CNI plugin:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
        cniConfig:
          cilium: {}
    

    EKS Anywhere selects this as the default plugin when generating a cluster config.

  • Or for selecting Kindnetd as the CNI plugin:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
        cniConfig:
          kindnetd: {}
    

NOTE: EKS Anywhere allows specifying only 1 plugin for a cluster and does not allow switching the plugins after the cluster is created.

Policy Configuration options for Cilium plugin

Cilium accepts policy enforcement modes from the users to determine the allowed traffic between pods. The allowed values for this mode are: default, always and never. Please refer to the official Cilium documentation for more details on how each mode affects communication within the cluster and choose a mode accordingly. If you do not set this field, Cilium is launched with the default mode. Starting with release 0.8, Cilium's policy enforcement mode can be set through the cluster spec as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        policyEnforcementMode: "always"

Please note that if the always mode is selected, all communication between pods is blocked unless NetworkPolicy objects allowing communication are created. In order to ensure that the cluster gets created successfully, EKS Anywhere will create the required NetworkPolicy objects for all its core components. But it is up to the user to create the NetworkPolicy objects needed for the user workloads once the cluster is created.
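For illustration, a minimal sketch of the kind of NetworkPolicy a user workload might need under always mode, allowing traffic between pods in a hypothetical my-app namespace plus DNS egress to kube-system (adjust selectors, namespaces, and ports to your workloads):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-namespace-and-dns
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}              # pods in the same namespace
  egress:
  - to:
    - podSelector: {}              # pods in the same namespace
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53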

Network policies created by EKS Anywhere for “always” mode

As mentioned above, if Cilium is configured with policyEnforcementMode set to always, EKS Anywhere creates NetworkPolicy objects to enable communication between its core components. EKS Anywhere will create NetworkPolicy resources in the following namespaces allowing all ingress/egress traffic by default:

  • kube-system
  • eksa-system
  • All core Cluster API namespaces:
    • capi-system
    • capi-kubeadm-bootstrap-system
    • capi-kubeadm-control-plane-system
    • etcdadm-bootstrap-provider-system
    • etcdadm-controller-system
    • cert-manager
  • Infrastructure provider’s namespace (for instance, capd-system OR capv-system)
  • If Gitops is enabled, then the gitops namespace (flux-system by default)

This is the NetworkPolicy that will be created in these namespaces for the cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-egress
  namespace: test
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress

Switching the Cilium policy enforcement mode

The policy enforcement mode for Cilium can be changed as a part of cluster upgrade through the cli upgrade command.

  1. Switching to always mode: When switching from default/never to always mode, EKS Anywhere will create the required NetworkPolicy objects for its core components (listed above). This will ensure that the cluster gets upgraded successfully. But it is up to the user to create the NetworkPolicy objects required for the user workloads.

  2. Switching from always mode: When switching from always to default mode, EKS Anywhere will not delete any of the existing NetworkPolicy objects, including the ones required for EKS Anywhere components (listed above). The user must delete NetworkPolicy objects as needed.

EgressMasqueradeInterfaces option for Cilium plugin

Cilium accepts the EgressMasqueradeInterfaces option from users to limit which interfaces masquerading is performed on. The allowed values for this mode are an interface name such as eth0 or an interface prefix such as eth+. Please refer to the official Cilium documentation for more details on how this option affects masquerading traffic.

By default, masquerading will be performed on all traffic leaving on a non-Cilium network device. This only has an effect on traffic egressing from a node to an external destination not part of the cluster and does not affect routing decisions.

This field can be set as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        egressMasqueradeInterfaces: "eth0"

RoutingMode option for Cilium plugin

By default all traffic is sent by Cilium over Geneve tunneling on the network. The routingMode option allows users to switch to native routing instead.

This field can be set as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        routingMode: "direct"

Use a custom CNI

EKS Anywhere can be configured to skip EKS Anywhere’s default Cilium CNI upgrades via the skipUpgrade field. skipUpgrade can be true or false. When not set, it defaults to false.

When creating a new cluster with skipUpgrade enabled, EKS Anywhere Cilium will be installed as it is required to successfully provision an EKS Anywhere cluster. When the cluster successfully provisions, EKS Anywhere Cilium may be uninstalled and replaced with a different CNI. Subsequent upgrades to the cluster will not attempt to upgrade or re-install EKS Anywhere Cilium.

Once enabled, skipUpgrade cannot be disabled.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium:
        skipUpgrade: true

The Cilium CLI can be used to uninstall EKS Anywhere Cilium via cilium uninstall. See the replacing Cilium task for a walkthrough on how to successfully replace EKS Anywhere Cilium.

Node IPs configuration option

Starting with release v0.10, the node-cidr-mask-size flag for the Kubernetes controller manager (kube-controller-manager) is configurable via the EKS Anywhere cluster spec. Because clusterNetwork.nodes is an optional field, it is not generated by the generate clusterconfig command. The nodes block needs to be added manually to the cluster spec under the clusterNetwork section:

  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium: {}
    nodes:
      cidrMaskSize: 24

If the user does not specify the clusterNetwork.nodes field in the cluster yaml spec, the value for this flag defaults to 24 for IPv4. Please note that this mask size needs to be greater than the pods CIDR mask size. In the above spec, the pod CIDR mask size is 16 and the node CIDR mask size is 24, which gives the cluster 256 blocks of /24 node subnets. For example, node1 will get 192.168.0.0/24, node2 will get 192.168.1.0/24, node3 will get 192.168.2.0/24, and so on.

To support more than 256 nodes, the cluster CIDR block needs to be large, and the node CIDR mask size needs to be small enough to provide that many node subnets. For instance, to support 1024 nodes, a user can do either of the following:

  • Set the pods cidr blocks to 192.168.0.0/16 and node cidr mask size to 26
  • Set the pods cidr blocks to 192.168.0.0/15 and node cidr mask size to 25

Please note that the node-cidr-mask-size needs to be large enough to accommodate the number of pods you want to run on each node. A size of 24 will give enough IP addresses for about 250 pods per node, whereas a size of 26 will only give you about 60 IPs. This is an immutable field, and the value can't be updated once the cluster has been created.
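As a quick worked example of the sizing above: with a 192.168.0.0/16 pod CIDR and a node CIDR mask size of 24, the cluster can allocate 2^(24-16) = 256 node subnets, each /24 subnet holding 256 addresses (roughly 250 usable pod IPs per node). Shrinking the per-node subnet to /26 yields 2^(26-16) = 1024 node subnets of 64 addresses each (roughly 60 usable pod IPs per node), which is how the 1024-node examples above work out.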

4.12.5 - IAM Roles for Service Accounts configuration

EKS Anywhere cluster spec for IAM Roles for Service Accounts (IRSA)

IAM Role for Service Account on EKS Anywhere clusters with self-hosted signing keys

IAM Roles for Service Account (IRSA) enables applications running in clusters to authenticate with AWS services using IAM roles. The current solution for leveraging this in EKS Anywhere involves creating your own OIDC provider for the cluster, and hosting your cluster’s public service account signing key. The public keys along with the OIDC discovery document should be hosted somewhere that AWS STS can discover it.

The steps below are based on the guide for configuring IRSA for DIY Kubernetes, with modifications specific to EKS Anywhere’s cluster provisioning workflow. The main modification is the process of generating the keys.json document. As per the original guide, the user has to create the service account signing keys, and then use that to create the keys.json document prior to cluster creation. This order is reversed for EKS Anywhere clusters, so you will create the cluster first, and then retrieve the service account signing key generated by the cluster, and use it to create the keys.json document. The sections below show how to do this in detail.

Create an OIDC provider and make its discovery document publicly accessible

You must use a single OIDC provider per EKS Anywhere cluster, which is the best practice to prevent a token from one cluster being used with another cluster. These steps describe the process of using a public S3 bucket to host the OIDC discovery.json and keys.json documents.

  1. Create an S3 bucket to host the public signing keys and OIDC discovery document for your cluster . Make a note of the $HOSTNAME and $ISSUER_HOSTPATH.

  2. Create the OIDC discovery document as follows:

    cat <<EOF > discovery.json
    {
        "issuer": "https://$ISSUER_HOSTPATH",
        "jwks_uri": "https://$ISSUER_HOSTPATH/keys.json",
        "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
        "response_types_supported": [
            "id_token"
        ],
        "subject_types_supported": [
            "public"
        ],
        "id_token_signing_alg_values_supported": [
            "RS256"
        ],
        "claims_supported": [
            "sub",
            "iss"
        ]
    }
    EOF
    
  3. Upload the discovery.json file to the S3 bucket:

    aws s3 cp --acl public-read ./discovery.json s3://$S3_BUCKET/.well-known/openid-configuration
    
  4. Create an OIDC provider for your cluster. Set the Provider URL to https://$ISSUER_HOSTPATH and Audience to sts.amazonaws.com.

  5. Make a note of the Provider field of OIDC provider after it is created.

  6. Assign an IAM role to the OIDC provider.

    1. Navigate to the AWS IAM Console.

    2. Click on the OIDC provider.

    3. Click Assign role.

    4. Select Create a new role.

    5. Select Web identity as the trusted entity.

    6. In the Web identity section:

      • If your Identity provider is not auto selected, select it.
      • Select sts.amazonaws.com as the Audience.
    7. Click Next.

    8. Configure your desired Permissions policies.

    9. Below is a sample trust policy for the IAM role for your pods. Replace ACCOUNT_ID, ISSUER_HOSTPATH, NAMESPACE and SERVICE_ACCOUNT. Example: Scoped to a service account

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/ISSUER_HOSTPATH"
                  },
                  "Action": "sts:AssumeRoleWithWebIdentity",
                  "Condition": {
                      "StringEquals": {
                          "ISSUER_HOSTPATH:sub": "system:serviceaccount:NAMESPACE:SERVICE_ACCOUNT"
                      },
                  }
              }
          ]
      }
      
    10. Create the IAM Role and make a note of the Role name.

    11. After the cluster is created you can grant service accounts access to the role by modifying the trust relationship. See the How to use trust policies with IAM Roles for more information on trust policies. Refer to Configure the trust relationship for the OIDC provider’s IAM Role for a working example.

Create (or upgrade) the EKS Anywhere cluster

When creating (or upgrading) the EKS Anywhere cluster, you need to configure the kube-apiserver’s service-account-issuer flag so it can issue and mount projected service account tokens in pods. For this, use the value obtained in the first section for $ISSUER_HOSTPATH as the service-account-issuer. Configure the kube-apiserver by setting this value through the EKS Anywhere cluster spec:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
    name: my-cluster-name
spec:
    podIamConfig:
        serviceAccountIssuer: https://$ISSUER_HOSTPATH

Set the remaining fields in cluster spec as required and create the cluster.

Generate keys.json and make it publicly accessible

  1. The cluster provisioning workflow generates a pair of service account signing keys. Retrieve the public signing key from the cluster and create a keys.json document with the content.

    git clone https://github.com/aws/amazon-eks-pod-identity-webhook
    cd amazon-eks-pod-identity-webhook
    kubectl get secret ${CLUSTER_NAME}-sa -n eksa-system -o jsonpath={.data.tls\\.crt} | base64 --decode > ${CLUSTER_NAME}-sa.pub
    go run ./hack/self-hosted/main.go -key ${CLUSTER_NAME}-sa.pub | jq '.keys += [.keys[0]] | .keys[1].kid = ""' > keys.json
    
  2. Upload the keys.json document to the S3 bucket.

    aws s3 cp --acl public-read ./keys.json s3://$S3_BUCKET/keys.json
    

Deploy pod identity webhook

The Amazon Pod Identity Webhook configures pods with the necessary environment variables and tokens (via file mounts) to interact with AWS services. The webhook will configure any pod associated with a service account that has an eks.amazonaws.com/role-arn annotation.

  1. Clone amazon-eks-pod-identity-webhook .

  2. Set the $KUBECONFIG environment variable to the path of the EKS Anywhere cluster.

  3. Apply the manifests for the amazon-eks-pod-identity-webhook. The image used here will be pulled from docker.io. Optionally, the image can be imported into (or proxied through) your private registry. Change the IMAGE argument here to your private registry if needed.

    make cluster-up IMAGE=amazon/amazon-eks-pod-identity-webhook:latest
    
  4. Create a service account with an eks.amazonaws.com/role-arn annotation set to the IAM Role created for the OIDC provider.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: my-serviceaccount
      namespace: default
      annotations:
        # set this with value of OIDC_IAM_ROLE
        eks.amazonaws.com/role-arn: "arn:aws:iam::ACCOUNT_ID:role/s3-reader"
    
        # optional: Defaults to "sts.amazonaws.com" if not set
        eks.amazonaws.com/audience: "sts.amazonaws.com"
    
        # optional: When set to "true", adds AWS_STS_REGIONAL_ENDPOINTS env var
        #   to containers
        eks.amazonaws.com/sts-regional-endpoints: "true"
    
        # optional: Defaults to 86400 for expirationSeconds if not set
        #   Note: This value can be overwritten if specified in the pod
        #         annotation as shown in the next step.
        eks.amazonaws.com/token-expiration: "86400"
    
  5. Finally, apply the my-service-account.yaml file to create your service account.

    kubectl apply -f my-service-account.yaml
    
  6. You can validate IRSA by following IRSA setup and test. Ensure the awscli pod is deployed in the same namespace as the ServiceAccount pod-identity-webhook.
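    For reference, a minimal test pod along these lines might look like the sketch below (the image, pod name, and namespace are illustrative; the referenced service account must carry the eks.amazonaws.com/role-arn annotation shown earlier):

    apiVersion: v1
    kind: Pod
    metadata:
      name: awscli
      namespace: default
    spec:
      serviceAccountName: my-serviceaccount   # annotated with eks.amazonaws.com/role-arn
      containers:
      - name: awscli
        image: amazon/aws-cli:latest
        command: ["sleep", "infinity"]

    Once the pod is running, a command such as kubectl exec -it awscli -- aws sts get-caller-identity should return the assumed IAM role if IRSA is working.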

Configure the trust relationship for the OIDC provider’s IAM Role

In order to grant certain service accounts access to the desired AWS resources, edit the trust relationship for the OIDC provider’s IAM Role (OIDC_IAM_ROLE) created in the first section, and add in the desired service accounts.

  1. Choose the role in the console to open it for editing.

  2. Choose the Trust relationships tab, and then choose Edit trust relationship.

  3. Find the line that looks similar to the following:

    "$ISSUER_HOSTPATH:aud": "sts.amazonaws.com"
    
  4. Change the line to look like the following line. Replace aud with sub and replace KUBERNETES_SERVICE_ACCOUNT_NAMESPACE and KUBERNETES_SERVICE_ACCOUNT_NAME with the name of your Kubernetes service account and the Kubernetes namespace that the account exists in.

    "$ISSUER_HOSTPATH:sub": "system:serviceaccount:KUBERNETES_SERVICE_ACCOUNT_NAMESPACE:KUBERNETES_SERVICE_ACCOUNT_NAME"
    

    The allow list example below applies the my-serviceaccount service account in the default namespace and all service accounts in the amazon-cloudwatch namespace for the us-west-2 region. Remember to replace Account_ID and S3_BUCKET with the required values.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "arn:aws:iam::$Account_ID:oidc-provider/s3.us-west-2.amazonaws.com/$S3_BUCKET"
                },
                "Action": "sts:AssumeRoleWithWebIdentity",
                "Condition": {
                    "StringLike": {
                        "s3.us-west-2.amazonaws.com/$S3_BUCKET:sub": [
                                "system:serviceaccount:default:my-serviceaccount",
                                "system:serviceaccount:amazon-cloudwatch:*"
                            ]
                        }
                    }
                }
            ]
        }
    
  5. Refer this doc for different ways of configuring one or multiple service accounts through the condition operators in the trust relationship.

  6. Choose Update Trust Policy to finish.

4.12.6 - IAM Authentication

EKS Anywhere cluster yaml specification AWS IAM Authenticator reference

AWS IAM Authenticator support (optional)

Provider support details

vSphere, Bare Metal, Nutanix, CloudStack, Snow

EKS Anywhere can create clusters that support AWS IAM Authenticator-based api server authentication. In order to add IAM Authenticator support, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   # IAM Authenticator support
   identityProviderRefs:
      - kind: AWSIamConfig
        name: aws-iam-auth-config
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
   name: aws-iam-auth-config
spec:
    awsRegion: ""
    backendMode:
        - ""
    mapRoles:
        - roleARN: arn:aws:iam::XXXXXXXXXXXX:role/myRole
          username: myKubernetesUsername
          groups:
          - ""
    mapUsers:
        - userARN: arn:aws:iam::XXXXXXXXXXXX:user/myUser
          username: myKubernetesUsername
          groups:
          - ""
    partition: ""

identityProviderRefs (Under Cluster)

List of identity providers you want configured for the Cluster. This would include a reference to the AWSIamConfig object with the configuration below.

awsRegion (required)

  • Description: awsRegion can be any region in the aws partition that the IAM roles exist in.
  • Type: string

backendMode (required)

  • Description: backendMode configures the IAM authenticator server’s backend mode (i.e. where to source mappings from). We support EKSConfigMap and CRD modes supported by AWS IAM Authenticator, for more details refer to backendMode
  • Type: string

mapRoles and mapUsers (recommended when using EKSConfigMap backendMode)

  • Description: When using EKSConfigMap backendMode, we recommend providing either mapRoles or mapUsers to set the IAM role mappings at the time of creation. This input is added to an EKS style ConfigMap. For more details refer to EKS IAM

  • Type: list object

    roleARN, userARN (required)

    • Description: IAM ARN to authenticate to the cluster. roleARN specifies an IAM role and userARN specifies an IAM user.
    • Type: string

    username (required)

    • Description: The Kubernetes username the IAM ARN is mapped to in the cluster. The ARN gets mapped to the Kubernetes cluster permissions associated with the username.
    • Type: string

    groups

    • Description: List of kubernetes user groups that the mapped IAM ARN is given permissions to.
    • Type: list string

partition

  • Description: This field is used to set the aws partition that the IAM roles are present in. Default value is aws.
  • Type: string

4.12.7 - OIDC

EKS Anywhere cluster yaml specification OIDC reference

OIDC support (optional)

EKS Anywhere can create clusters that support api server OIDC authentication.

Provider support details

vSphere, Bare Metal, Nutanix, CloudStack, Snow

In order to add OIDC support, you need to configure your cluster by updating the configuration file to include the details below. The OIDC configuration can be added at cluster creation time, or introduced via a cluster upgrade in VMware and CloudStack.

This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   # OIDC support
   identityProviderRefs:
      - kind: OIDCConfig
        name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: OIDCConfig
metadata:
   name: my-cluster-name
spec:
    clientId: ""
    groupsClaim: ""
    groupsPrefix: ""
    issuerUrl: "https://x"
    requiredClaims:
      - claim: ""
        value: ""
    usernameClaim: ""
    usernamePrefix: ""

identityProviderRefs (Under Cluster)

List of identity providers you want configured for the Cluster. This would include a reference to the OIDCConfig object with the configuration below.

clientId (required)

  • Description: ClientId defines the client ID for the OpenID Connect client
  • Type: string

groupsClaim (optional)

  • Description: GroupsClaim defines the name of a custom OpenID Connect claim for specifying user groups
  • Type: string

groupsPrefix (optional)

  • Description: GroupsPrefix defines a string to be prefixed to all groups to prevent conflicts with other authentication strategies
  • Type: string

issuerUrl (required)

  • Description: IssuerUrl defines the URL of the OpenID issuer, only HTTPS scheme will be accepted
  • Type: string

requiredClaims (optional)

List of RequiredClaim objects listed below. Only one is supported at this time.

requiredClaims[0] (optional)

  • Description: RequiredClaim defines a key=value pair that describes a required claim in the ID Token
    • claim
      • type: string
    • value
      • type: string
  • Type: object

usernameClaim (optional)

  • Description: UsernameClaim defines the OpenID claim to use as the user name. Note that claims other than the default (‘sub’) are not guaranteed to be unique and immutable
  • Type: string

usernamePrefix (optional)

  • Description: UsernamePrefix defines a string to be prefixed to all usernames. If not provided, username claims other than ‘email’ are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value ‘-’.
  • Type: string

4.12.8 - Proxy

EKS Anywhere cluster yaml specification proxy configuration reference

Proxy support (optional)

Provider support details

vSphere, Bare Metal, Nutanix, CloudStack, Snow

You can configure EKS Anywhere to use a proxy to connect to the Internet. This is the generic template with proxy configuration for your reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   proxyConfiguration:
      httpProxy: http-proxy-ip:port
      httpsProxy: https-proxy-ip:port
      noProxy:
      - list of no proxy endpoints

Configuring Docker daemon

Given the configuration file above, EKS Anywhere will route its own traffic through the proxy. However, to use EKS Anywhere successfully, you also need to ensure your Docker daemon is configured to use the proxy.

This generally means updating your daemon to launch with the HTTPS_PROXY, HTTP_PROXY, and NO_PROXY environment variables.

For an example of how to do this with systemd, see Docker’s documentation.
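
As an illustration, one common approach with systemd is to add a proxy drop-in file for the Docker service and restart the daemon. The file path follows Docker’s documented convention; the proxy address and NO_PROXY entries below are placeholders and should be replaced with your own values.

# Create a systemd drop-in directory for the Docker service
sudo mkdir -p /etc/systemd/system/docker.service.d

# Write the proxy environment variables into a drop-in file
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.0.1:3218"
Environment="HTTPS_PROXY=http://192.168.0.1:3218"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"
EOF

# Reload systemd and restart Docker so the new environment takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker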

Configuring EKS Anywhere proxy without config file

For commands that use a cluster config file, EKS Anywhere derives its proxy configuration from that file.

However, for commands that do not utilize a cluster config file, you can set the following environment variables:

export HTTPS_PROXY=https-proxy-ip:port
export HTTP_PROXY=http-proxy-ip:port
export NO_PROXY=no-proxy-domain.com,another-domain.com,localhost

Proxy Configuration Spec Details

proxyConfiguration (required)

  • Description: Top-level key; required to use a proxy.
  • Type: object

httpProxy (required)

  • Description: HTTP proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpProxy: 192.168.0.1:3218

httpsProxy (required)

  • Description: HTTPS proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpsProxy: 192.168.0.1:3218

noProxy (optional)

  • Description: list of endpoints that should not be routed through the proxy; can be an IP, CIDR block, or a domain name
  • Type: list of strings
  • Example
  noProxy:
   - localhost
   - 192.168.0.1
   - 192.168.0.0/16
   - .example.com
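
Putting the field examples above together, a complete proxyConfiguration block that uses the same illustrative proxy address and noProxy entries could look like this:

proxyConfiguration:
   httpProxy: 192.168.0.1:3218
   httpsProxy: 192.168.0.1:3218
   noProxy:
   - localhost
   - 192.168.0.1
   - 192.168.0.0/16
   - .example.com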

4.12.9 - MachineHealthCheck

EKS Anywhere cluster yaml specification for MachineHealthCheck configuration

MachineHealthCheck Support

Provider support details

vSphere Bare Metal Nutanix CloudStack Snow
Supported?

You can configure EKS Anywhere to specify timeouts and maxUnhealthy values for machine health checks.

A MachineHealthCheck (MHC) is a resource in Cluster API which allows users to define conditions under which Machines within a Cluster should be considered unhealthy. A MachineHealthCheck is defined on a management cluster and scoped to a particular workload cluster.

Note: Even though the MachineHealthCheck configuration in the EKS-A spec is optional, MachineHealthChecks are still installed for all clusters using the default values mentioned below.
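
Because these defaults are always applied, you can inspect the MachineHealthCheck objects that EKS Anywhere installed by querying the management cluster. This is a minimal sketch assuming kubectl access to the management cluster; the -A flag lists objects across all namespaces since the namespace that holds them can vary.

# List all Cluster API MachineHealthCheck objects on the management cluster
kubectl get machinehealthchecks.cluster.x-k8s.io -A

# Inspect the effective maxUnhealthy, nodeStartupTimeout, and unhealthyMachineTimeout values
kubectl describe machinehealthchecks.cluster.x-k8s.io -A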

EKS Anywhere allows users to have granular control over MachineHealthChecks in their cluster configuration, with default values (derived from Cluster API) being applied if the MHC is not configured in the spec. The top-level machineHealthCheck field governs the global MachineHealthCheck settings for all Machines (control-plane and worker). These global settings can be overridden through the nested machineHealthCheck field in the control plane configuration and each worker node configuration. If the nested MHC fields are not configured, then the top-level settings are applied to the respective Machines.

The following cluster spec shows an example of how to configure health check timeouts and maxUnhealthy:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   machineHealthCheck:               # Top-level MachineHealthCheck configuration
     maxUnhealthy: "60%"
     nodeStartupTimeout: "10m0s"
     unhealthyMachineTimeout: "5m0s"
   ...
   controlPlaneConfiguration:        # MachineHealthCheck configuration for the control plane
     machineHealthCheck:
       maxUnhealthy: "100%"
       nodeStartupTimeout: "15m0s"
       unhealthyMachineTimeout: "10m"
   ...
   workerNodeGroupConfigurations:
   - count: 1
     name: md-0
     machineHealthCheck:             # MachineHealthCheck configuration for worker node group 0
       maxUnhealthy: "100%"
       nodeStartupTimeout: "10m0s"
       unhealthyMachineTimeout: "20m"
   - count: 1
     name: md-1
     machineHealthCheck:             # MachineHealthCheck configuration for worker node group 1
       maxUnhealthy: "100%"
       nodeStartupTimeout: "10m0s"
       unhealthyMachineTimeout: "20m"
   ...

MachineHealthCheck Spec Details

machineHealthCheck (optional)

  • Description: top-level key; required to configure global MachineHealthCheck timeouts and maxUnhealthy.
  • Type: object

machineHealthCheck.maxUnhealthy (optional)

  • Description: determines the maximum permissible number or percentage of unhealthy Machines in a cluster before further remediation is prevented. This ensures that MachineHealthChecks only remediate Machines when the cluster is healthy.
  • Default: 100% for control plane machines, 40% for worker nodes (Cluster API defaults).
  • Type: integer (count) or string (percentage)

machineHealthCheck.nodeStartupTimeout (optional)

  • Description: determines how long a MachineHealthCheck should wait for a Node to join the cluster, before considering a Machine unhealthy.
  • Default: 20m0s for the Tinkerbell provider; 10m0s for all other providers.
  • Minimum Value (If configured): 30s
  • Type: string

machineHealthCheck.unhealthyMachineTimeout (optional)

  • Description: determines how long the unhealthy Node conditions (e.g., Ready=False, Ready=Unknown) should be matched for, before considering a Machine unhealthy.
  • Default: 5m0s
  • Type: string

controlPlaneConfiguration.machineHealthCheck (optional)

  • Description: Control plane level configuration for MachineHealthCheck timeouts and maxUnhealthy values.
  • Type: object

controlPlaneConfiguration.machineHealthCheck.maxUnhealthy (optional)

  • Description: determines the maximum permissible number or percentage of unhealthy control plane Machines in a cluster before further remediation is prevented. This ensures that MachineHealthChecks only remediate Machines when the cluster is healthy.
  • Default: Top-level machineHealthCheck maxUnhealthy if set; otherwise 100%.
  • Type: integer (count) or string (percentage)

controlPlaneConfiguration.machineHealthCheck.nodeStartupTimeout (optional)

  • Description: determines how long a MachineHealthCheck should wait for a control plane Node to join the cluster, before considering the Machine unhealthy.
  • Default: Top-level machineHealthCheck nodeStartupTimeout if set; otherwise 20m0s for the Tinkerbell provider and 10m0s for all other providers.
  • Minimum Value (if configured): 30s
  • Type: string